
TCP window scaling, timestamps and SACK

The Linux TCP stack has a myriad of sysctl knobs that allow its behavior to be changed. This includes the amount of memory that can be used for receive or transmit operations, the maximum number of sockets, and optional features and protocol extensions.

There are multiple articles that recommend disabling TCP extensions such as timestamps or selective acknowledgments (SACK) for various “performance tuning” or “security” reasons.

This article provides background on what these extensions do, why they
are enabled by default, how they relate to one another and why it is normally a bad idea to turn them off.
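
All three extensions discussed below are controlled by sysctl settings and default to enabled on a stock Linux kernel. A quick way to check the current values (the knob names below are the standard IPv4 sysctl names):

$ sysctl net.ipv4.tcp_window_scaling net.ipv4.tcp_timestamps net.ipv4.tcp_sack
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1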

TCP Window scaling

The data transmission rate that TCP can sustain is limited by several factors. Some of these are:

  • Round trip time (RTT).  This is the time it takes for a packet to get to the destination and a reply to come back. Lower is better.
  • Lowest link speed of the network paths involved.
  • Frequency of packet loss.
  • The speed at which new data can be made available for transmission.
    For example, the CPU needs to be able to pass data to the network adapter fast enough. If the CPU needs to encrypt the data first, the adapter might have to wait for new data. In similar fashion disk storage can be a bottleneck if it can’t read the data fast enough.
  • The maximum possible size of the TCP receive window. The receive window determines how much data (in bytes) TCP can transmit before it has to wait for the receiver to report reception of that data. The window size is announced by the receiver, which constantly updates the value as it reads and acknowledges the incoming data. The receive window’s current value is contained in the TCP header that is part of every segment sent by TCP. The sender is thus aware of the current receive window whenever it receives an acknowledgment from the peer. This means that the higher the round-trip time, the longer it takes for the sender to get receive window updates.

TCP is limited to at most 64 kilobytes of unacknowledged (in-flight) data. This is not even close to what is needed to sustain a decent data rate in most networking scenarios. Let us look at some examples.

Theoretical data rate

With a round-trip time of 100 milliseconds, TCP can transfer at most 640 kilobytes per second (64 kilobytes per round trip divided by 0.1 seconds). With a 1 second delay, the maximum theoretical data rate drops to only 64 kilobytes per second.

This is because of the receive window. Once 64 kilobytes of data have been sent, the receive window is already full. The sender must wait until the peer informs it that at least some of the data has been read by the application.

The first segment sent reduces the TCP window by the size of that segment. It takes one round trip before an update of the receive window value becomes available. When updates arrive with a 1 second delay, this limits throughput to 64 kilobytes per second even if the link has plenty of bandwidth available.

In order to fully utilize a fast network with several milliseconds of delay, a window size larger than what classic TCP supports is a must. The ’64 kilobyte limit’ is an artifact of the protocol’s specification: the TCP header reserves only 16 bits for the receive window size. This allows receive windows of up to 64 kilobytes. When the TCP protocol was originally designed, this size was not seen as a limit.

Unfortunately, it’s not possible to just change the TCP header to support a larger maximum window value. Doing so would mean all implementations of TCP would have to be updated simultaneously or they wouldn’t understand one another anymore. To solve this, the interpretation of the receive window value is changed instead.

The ‘window scaling option’ allows this to be done while keeping compatibility with existing implementations.

TCP Options: Backwards-compatible protocol extensions

TCP supports optional extensions. This makes it possible to enhance the protocol with new features without the need to update all implementations at once. When a TCP initiator connects to the peer, it also sends a list of supported extensions. All extensions follow the same format: a unique option number followed by the length of the option and the option data itself.

The TCP responder checks all the option numbers contained in the connection request. If it does not understand an option number, it skips ‘length’ bytes of data and checks the next option number. The responder omits options it did not understand from the reply. This allows both the sender and receiver to learn the common set of supported options.
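
This negotiation is visible in a packet capture. A SYN captured with tcpdump, for example, typically shows the offered options in brackets (the addresses and values below are purely illustrative):

$ sudo tcpdump -nn -c 1 'tcp[tcpflags] & tcp-syn != 0'
IP 192.0.2.10.41852 > 198.51.100.7.443: Flags [S], seq 3848129011, win 64240,
    options [mss 1460,sackOK,TS val 1938254 ecr 0,nop,wscale 7], length 0

Here sackOK, TS and wscale correspond to the SACK, timestamp and window scaling extensions discussed in this article.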

With window scaling, the option data always consists of a single number.

The window scaling option

 
Window Scale option (WSopt): Kind: 3, Length: 3
    +---------+---------+---------+
    | Kind=3  |Length=3 |shift.cnt|
    +---------+---------+---------+
         1         1         1

The window scaling option tells the peer that the receive window value found in the TCP header should be scaled by the given number to get the real size.

For example, a TCP initiator that announces a window scaling factor of 7 tries to instruct the responder that any future packets that carry a receive window value of 512 really announce a window of 65536 bytes. This is an increase by a factor of 128. It allows a maximum TCP window of about 8 megabytes.

A TCP responder that does not understand this option ignores it. The TCP packet sent in reply to the connection request (the syn-ack) then does not contain the window scale option. In this case both sides can only use a 64k window size. Fortunately, almost every TCP stack supports and enables this option by default, including Linux.

The responder includes its own desired scaling factor. Both peers can use different numbers. It’s also legitimate to announce a scaling factor of 0. This means the peer should treat the receive window value it receives verbatim, while still allowing scaled values in the reply direction; the recipient can then use a larger receive window.

Unlike SACK or TCP timestamps, the window scaling option only appears in the first two packets of a TCP connection; it cannot be changed afterwards. It is also not possible to determine the scaling factor from a packet capture of a connection that does not include the initial three-way handshake.

The largest supported scaling factor is 14. This allows TCP window sizes
of up to one Gigabyte.
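
Since the scale factor is applied as a plain bit shift, the numbers above are easy to verify with shell arithmetic:

$ echo $(( 512 << 7 ))       # receive window value 512, scale factor 7
65536
$ echo $(( 65535 << 14 ))    # largest window value with the largest factor
1073725440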

Window scaling downsides

Window scaling can cause data corruption in very special cases. Before you disable the option, though: such corruption is impossible under normal circumstances, and there is also a solution in place that prevents it. Unfortunately, some people disable that solution without realizing its relationship with window scaling. First, let’s have a look at the actual problem that needs to be addressed. Imagine the following sequence of events:

  1. The sender transmits segments: s_1, s_2, s_3, … s_n
  2.  The receiver sees: s_1, s_3, .. s_n and sends an acknowledgment for s_1.
  3.  The sender considers s_2 lost and sends it a second time. It also sends new data contained in segment s_n+1.
  4.  The receiver then sees: s_2, s_n+1, s_2: the packet s_2 is received twice.

This can happen for example when a sender triggers re-transmission too early. Such erroneous re-transmits are never a problem in normal cases, even with window scaling. The receiver will just discard the duplicate.

Old data to new data

The TCP sequence number can be at most 4 gigabytes. If it becomes larger than this, the sequence wraps back to 0 and then increases again. This is not a problem in itself, but if this occurs fast enough then the above scenario can create an ambiguity.

If a wrap-around occurs at the right moment, the sequence number s_2 (the re-transmitted packet) can already be larger than s_n+1. Thus, in the last step (4), the receiver may interpret this as: s_2, s_n+1, s_n+m, i.e. it could view the ‘old’ packet s_2 as containing new data.

Normally, this won’t happen because a ‘wrap around’ occurs only every couple of seconds or minutes even on high-bandwidth links. The interval between the original and an unneeded re-transmit will be a lot smaller.

For example, with a transmit speed of 50 megabytes per second, a duplicate needs to arrive more than one minute late for this to become a problem. The sequence numbers do not wrap fast enough for small delays to induce this problem.

Once TCP approaches ‘gigabyte per second’ throughput rates, the sequence numbers can wrap so fast that even a delay of only a few milliseconds can create duplicates that TCP cannot detect anymore. By solving the problem of the too-small receive window, TCP can now be used for network speeds that were impossible before – and that creates a new, albeit rare, problem. To safely use gigabyte-per-second speeds in environments with very low RTT, receivers must be able to detect such old duplicates without relying on the sequence number alone.

TCP time stamps

A best-before date

In the most simple terms, TCP timestamps just add a time stamp to the packets to resolve the ambiguity caused by very fast sequence number wrap-around. If a segment appears to contain new data, but its timestamp is older than the last in-window packet, then the sequence number has wrapped and the “new” packet is actually an older duplicate. This resolves the ambiguity of re-transmits even for extreme corner cases.

But this extension allows for more than just detection of old packets. The other major feature made possible by TCP timestamps is more precise round-trip time measurement (RTTm).

A need for precise round-trip-time estimation

When both peers support timestamps,  every TCP segment carries two additional numbers: a timestamp value and a timestamp echo.

 
TCP Timestamp option (TSopt): Kind: 8, Length: 10
+-------+----+----------------+-----------------+
|Kind=8 | 10 |TS Value (TSval)|EchoReply (TSecr)|
+-------+----+----------------+-----------------+
    1      1         4                4

An accurate RTT estimate is crucial for TCP performance. TCP automatically re-sends data that was not acknowledged. Re-transmission is triggered by a timer: If it expires, TCP considers one or more packets that it has not yet received an acknowledgment for to be lost. They are then sent again.

But “has not been acknowledged” does not mean the segment was lost. It is also possible that the receiver did not send an acknowledgment so far or that the acknowledgment is still in flight. This creates a dilemma: TCP must wait long enough for such slight delays to not matter, but it can’t wait for too long either.

Low versus high network delay

In networks with a high delay, if the timer fires too fast, TCP frequently wastes time and bandwidth with unneeded re-sends.

In networks with a low delay however, waiting for too long causes reduced throughput when a real packet loss occurs. Therefore, the timer should expire sooner in low-delay networks than in those with a high delay. The TCP retransmit timeout therefore cannot be a fixed, constant value. It needs to adapt based on the delay the connection experiences in the network.

Round-trip time measurement

TCP picks a retransmit timeout that is based on the expected round-trip time (RTT). The RTT is not known in advance. RTT is estimated by measuring the delta between the time a segment is sent and the time TCP receives an acknowledgment for the data carried by that segment.

This is complicated by several factors.

  • For performance reasons, TCP does not generate a new acknowledgment for every packet it receives. It waits for a very small amount of time: if more segments arrive, their reception can be acknowledged with a single ACK packet. This is called “cumulative ACK”.
  • The round-trip time is not constant. This is because of a myriad of factors. For example, a client might be a mobile phone switching to different base stations as it’s moved around. It’s also possible that packet switching takes longer when link or CPU utilization increases.
  • A packet that had to be re-sent must be ignored during RTT computation. This is because the sender cannot tell if the ACK for the re-transmitted segment is acknowledging the original transmission (that arrived after all) or the re-transmission.

This last point is significant: when TCP is busy recovering from a loss, it may only receive ACKs for re-transmitted segments. It then can’t measure (update) the RTT during this recovery phase. As a consequence it can’t adjust the re-transmission timeout, which then keeps growing exponentially. That’s a pretty specific case (it assumes that other mechanisms such as fast retransmit or SACK did not help). Nevertheless, with TCP timestamps, RTT evaluation is done even in this case.

If the extension is used, the peer reads the timestamp value from the TCP segment’s extension space and stores it locally. It then places this value in all the segments it sends back as the “timestamp echo”.

Therefore the option carries two timestamps: the sender’s own timestamp and the most recent timestamp it received from the peer. The “echo timestamp” is used by the original sender to compute the RTT: it’s the delta between its current timestamp clock and what was reflected in the “timestamp echo”.

Other timestamp uses

TCP timestamps even have other uses beyond PAWS (protection against wrapped sequence numbers, described above) and RTT measurements. For example, it becomes possible to detect if a retransmission was unnecessary. If the acknowledgment carries an older timestamp echo, the acknowledgment was for the initial packet, not the re-transmitted one.

Another, more obscure use case for TCP timestamps is related to the TCP syn cookie feature.

TCP connection establishment on server side

When connection requests arrive faster than a server application can accept the new incoming connection, the connection backlog will eventually reach its limit. This can occur because of a mis-configuration of the system or a bug in the application. It also happens when one or more clients send connection requests without reacting to the ‘syn ack’ response. This fills the connection queue with incomplete connections. It takes several seconds for these entries to time out. This is called a “syn flood attack”.

TCP timestamps and TCP syn cookies

Some TCP stacks allow accepting new connections even if the queue is full. When this happens, the Linux kernel will print a prominent message to the system log:

Possible SYN flooding on port P. Sending Cookies. Check SNMP counters.

This mechanism bypasses the connection queue entirely. The information that is normally stored in the connection queue is encoded into the SYN/ACK response’s TCP sequence number. When the ACK comes back, the queue entry can be rebuilt from the sequence number.

The sequence number only has limited space to store information. Connections established using the ‘TCP syn cookie’ mechanism cannot support TCP options for this reason.

The TCP options that are common to both peers can be stored in the timestamp, however. The ACK packet reflects the value back in the timestamp echo field, which allows the agreed-upon TCP options to be recovered as well. Otherwise, cookie-connections are restricted to the standard 64 kilobyte receive window.
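
Whether the kernel falls back to SYN cookies at all is governed by a separate sysctl, which is enabled by default on most distributions:

$ sysctl net.ipv4.tcp_syncookies
net.ipv4.tcp_syncookies = 1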

Common myths – timestamps are bad for performance

Unfortunately some guides recommend disabling TCP timestamps to reduce the number of times the kernel needs to access the timestamp clock to get the current time. This is not correct. As explained before, RTT estimation is a necessary part of TCP. For this reason, the kernel always takes a microsecond-resolution time stamp when a packet is received/sent.

Linux re-uses the clock timestamp taken for the RTT estimation for the remainder of the packet processing step. This also avoids the extra clock access to add a timestamp to an outgoing TCP packet.

The entire timestamp option only requires 10 bytes of TCP option space in each packet; this is not a significant decrease in the space available for packet payload.

Common myths – timestamps are a security problem

Some security audit tools and (older) blog posts recommend disabling TCP timestamps because they allegedly leak system uptime: this would then allow the patch level of the system/kernel to be estimated. This was true in the past: the timestamp clock is based on a constantly increasing value that starts at a fixed value on each system boot. A timestamp value would give an estimate as to how long the machine has been running (uptime).

As of Linux 4.12 TCP timestamps do not reveal the uptime anymore. All timestamp values sent use a peer-specific offset. Timestamp values also wrap every 49 days.

In other words, connections from or to address “A” see a different timestamp than connections to the remote address “B”.

Run sysctl net.ipv4.tcp_timestamps=2 to disable the randomization offset. This makes analyzing packet traces recorded by tools like wireshark or tcpdump easier – packets sent from the host then all have the same clock base in their TCP option timestamp.  For normal operation the default setting should be left as-is.

Selective Acknowledgments

TCP has problems if several packets in the same window of data are lost. This is because TCP Acknowledgments are cumulative, but only for packets
that arrived in-sequence. Example:

  • Sender transmits segments s_1, s_2, s_3, … s_n
  • Sender receives ACK for s_2
  • This means that both s_1 and s_2 were received and the
    sender no longer needs to keep these segments around.
  • Should s_3 be re-transmitted? What about s_4? s_n?

The sender waits for a “retransmission timeout” or ‘duplicate ACKs’ for s_2 to arrive. If a retransmit timeout occurs or several duplicate ACKs for s_2 arrive, the sender transmits s_3 again.

If the sender receives an acknowledgment for s_n, s_3 was the only missing packet. This is the ideal case. Only the single lost packet was re-sent.

If the sender receives an acknowledgment for a segment that is smaller than s_n, for example s_4, that means that more than one packet was lost. The sender needs to re-transmit the next segment as well.

Re-transmit strategies

It’s possible to just repeat the same sequence: re-send the next packet until the receiver indicates it has processed all packets up to s_n. The problem with this approach is that it requires one RTT until the sender knows which packet it has to re-send next. While such a strategy avoids unnecessary re-transmissions, it can take several seconds or more until TCP has re-sent the entire window of data.

The alternative is to re-send several packets at once. This approach allows TCP to recover more quickly when several packets have been lost. In the above example TCP re-sends s_3, s_4, s_5, … even though it can only be sure that s_3 has been lost.

From a latency point of view, neither strategy is optimal. The first strategy is fast if only a single packet has to be re-sent, but takes too long when multiple packets were lost.

The second one is fast even if multiple packets have to be re-sent, but at the cost of wasting bandwidth. In addition, such a TCP sender could have transmitted new data already while it was doing the unneeded re-transmissions.

With the available information TCP cannot know which packets were lost. This is where TCP Selective Acknowledgments (SACK) come in. Just like window scaling and timestamps, it is another optional, yet very useful TCP feature.

The SACK option

 
   TCP Sack-Permitted Option: Kind: 4, Length 2
   +---------+---------+
   | Kind=4  | Length=2|
   +---------+---------+

A sender that supports this extension includes the “Sack Permitted” option in the connection request. If both endpoints support the extension, then a peer that detects a packet is missing in the data stream can inform the sender about this.

 
   TCP SACK Option: Kind: 5, Length: Variable
                     +--------+--------+
                     | Kind=5 | Length |
   +--------+--------+--------+--------+
   |      Left Edge of 1st Block       |
   +--------+--------+--------+--------+
   |      Right Edge of 1st Block      |
   +--------+--------+--------+--------+
   |                                   |
   /            . . .                  /
   |                                   |
   +--------+--------+--------+--------+
   |      Left Edge of nth Block       |
   +--------+--------+--------+--------+
   |      Right Edge of nth Block      |
   +--------+--------+--------+--------+

A receiver that encounters s_2 followed by s_5…s_n will include a SACK block when it sends the acknowledgment for s_2:

 
                +--------+-------+
                | Kind=5 |   10  |
+--------+------+--------+-------+
| Left edge: s_5                 |
+--------+--------+-------+------+
| Right edge: s_n                |
+--------+-------+-------+-------+

This tells the sender that segments up to s_2 arrived in-sequence, but it also lets the sender know that the segments s_5 to s_n were received as well. The sender can then re-transmit the two missing segments (s_3 and s_4) and proceed to send new data.

The mythical lossless network

In theory, SACK provides no advantage if the connection cannot experience packet loss, or if the connection has such a low latency that even waiting one full RTT does not matter.

In practice lossless behavior is virtually impossible to ensure.
Even if the network and all its switches and routers have ample bandwidth and buffer space, packets can still be lost:

  • The host operating system might be under memory pressure and drop
    packets. Remember that a host might be handling tens of thousands of packet streams simultaneously.
  • The CPU might not be able to drain incoming packets from the network interface fast enough. This causes packet drops in the network adapter itself.

Also, if TCP timestamps are not available, even a connection with a very small RTT can stall momentarily during loss recovery.

Use of SACK does not increase the size of TCP packets unless a connection experiences packet loss. Because of this, there is hardly a reason to disable this feature. Almost all TCP stacks support SACK – it is typically only absent on low-power IoT-like devices that are not doing TCP bulk data transfers.

When a Linux system accepts a connection from such a device, TCP automatically disables SACK for the affected connection.
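
To see which extensions are actually in effect on a live connection, the ss utility can display the negotiated options. In its detailed per-connection output, flags such as ts and sack appear when the corresponding extension is active, along with the negotiated wscale factors (the addresses below are illustrative and the output is abbreviated):

$ ss -ti
ESTAB  0  0  192.0.2.1:22  198.51.100.7:52314
	 ts sack cubic wscale:7,7 rto:204 rtt:1.7/0.85 ...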

Summary

The three TCP extensions examined in this post are all related to TCP performance and should best be left to the default setting: enabled.

The TCP handshake ensures that only extensions that are understood by both parties are used, so there is never a need to disable an extension globally just because a peer might not support it.

Turning these extensions off results in severe performance penalties, especially in the case of TCP window scaling and SACK. TCP timestamps can be disabled without an immediate disadvantage, however there is no compelling reason to do so anymore. Keeping them enabled also makes it possible to support TCP options even when SYN cookies come into effect.


install Fedora on a Raspberry Pi 3

Fire up a Raspberry Pi with Fedora.

The Raspberry Pi Foundation has produced quite a few models over the years. This procedure was tested on third generation Pis – a Model B v1.2, and a Model B+ (the older 2 and the new 4 weren’t tested). These are the credit-card size Pis that have been around a few years.

get hardware

You do need a few hardware items, including the Pi. You don’t need any HAT (Hardware Attached on Top) boards or USB antennas. If you have used your Pi in the past, you probably have all these items.

  • current network. Perhaps this is your home lab.
  • ethernet cable. This connects the current network to the Raspberry Pi.
  • Raspberry Pi 3, model B or B+.
  • power supply
  • micro-SD card, 8GB or larger.
  • keyboard and video monitor.

The keyboard and video monitor together make up the local console. It’s possible – though complicated – to get by without a console, such as setting up an automated install then connecting over the network. A local console makes it easy to answer the configuration questions during Fedora’s first boot. Also, a mistake during AP configuration may break the network, locking out remote users.

download Fedora Minimal

The Fedora Minimal image, one of Fedora’s alt downloads, has all the core packages and network packages required (well, nearly – check out dnsmasq below). The image contains a ready-made file system, with over 400 packages already installed. This minimal image does not include popular packages like a development environment, Internet service or desktop. These types of software aren’t required for this work, and may well use too much memory if you install them.

The Fedora Minimal raw image fits on a small SD card and runs in less than 1 GB of memory (these old Pis have 1GB RAM).

The name of the downloaded file is something like Fedora-Minimal-32-1.6.aarch64.raw.xz. The file is compressed and is about 700MB in size. When the file is uncompressed, it’s 5GB. It’s an ext4 file system that’s mostly empty – about 1GB is used and 4GB is empty. All that empty space is the reason the compressed download is so much smaller than the uncompressed raw image.

copy to the micro-SD card

  • Copy the image to a micro-SD card.

This can be more complex than it sounds, and a painful experience. Finding a good micro-SD card takes work. Then there’s the challenge of physically attaching the card to your computer. Perhaps your laptop has a full SD card slot and you need a card adapter, or perhaps you need a USB adapter. Then, when it comes to copying, the OS may either help or get in your way. You may have luck with Fedora Media Writer, or with these Linux commands.

unxz ./Fedora-Minimal-32-1.6.aarch64.raw.xz
dd if=./Fedora-Minimal-32-1.6.aarch64.raw of=/dev/mmcblk0 bs=8M status=progress oflag=direct
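
Note that /dev/mmcblk0 above is only an example – the card can show up under a different device name on your machine, and dd will happily overwrite whatever device it is pointed at. Check with lsblk before writing (the device names and sizes below are illustrative):

$ lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
NAME        SIZE TYPE MOUNTPOINT
sda       238.5G disk
└─sda1    238.5G part /
mmcblk0    14.9G disk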

set up Fedora

  • Connect the Pi, power cable, network cable and micro-SD card.
  • Hit the power.
  • See the colored box as the graphics chip powers up.
  • Wait for the anaconda installer to start.
  • Answer anaconda’s setup questions.

Initial configuration of the OS takes a few minutes – a couple minutes waiting for boot-up, and a couple to fill out the spokes of anaconda’s text-based installer. In the examples below, the user is named nick and is an administrator (a member of the wheel group).

Congratulations! Your Fedora Pi is up and operational.

update software

  • Update packages with `sudo dnf update`.
  • Reboot the machine with `systemctl reboot`.

Over the years, a lot of people have put a lot of work into making the Raspberry Pi devices work. Use the latest software to make sure you get the benefit of their hard work. If you skip this step, you may find some things just don’t work.

The update downloads and installs about a hundred packages. Since the storage is a micro-SD card, writing new software is a slow process. This is what using computing storage felt like in the 1990s.

things to play with

There are a few other things that can be set up at this point, if you want to play around. It’s all optional. Try things like this.

  • Replace the localhost hostname with the command `sudo hostnamectl set-hostname raspi`.
  • Find the IP address with `ip addr`.
  • Try an SSH login, or even set up key-based login with `ssh-copy-id`.
  • Power down with `systemctl poweroff`.

Matthew Arnold: Why I switched to Fedora

To a veteran user of other distributions, Fedora can be a challenge. Many things are not where you expect them to be. The default LVM volume allocations are a bit tricky. And packages, including the kernel, are frequently upgraded. So why switch after years of using other distributions?

In my case, for a variety of technical and political reasons, Fedora was the best option if I wanted to continue using Linux as my daily driver. If you are making the transition from another distribution, here are some observations and tips to get you started.

Firm foundations

In Fedora you will find a community just as fiercely dedicated to its users and free software as Debian, as fanatical about polish and design as anyone in Ubuntu, and as passionate about learning and discovery as users of Arch or Slackware. Flowing under it all you will find a welcoming community dedicated to technical excellence. The form may change, but underneath all the trappings of systemd, dnf, rpm, and other differences, you will find a thriving healthy and growing community of people who have gathered together to make something awesome. Welcome to Fedora, and I hope you stay awhile.

The best way to get to know the Fedora community is to explore it for yourself. I hope a future article will highlight some of the more interesting aspects of Fedora for newcomers. Below are a few tips that I have put together to help you find your way around a new Fedora installation.

Install and explore

Installation proceeds as you would expect, but be aware that you might want to adjust the LVM volume allocations during the install process or shortly afterwards, or you might unexpectedly run low on space in a key place! Btrfs is also a supported option that is worth a look if you have lots of small disks.

Freedom matters

As stated above, Fedora has a software freedom commitment similar in spirit to that of Debian. This means that you should be able to give Fedora to anyone, anywhere without violating intellectual property laws. Any software which is either not licensed in a way that Fedora finds acceptable or that bears US patent encumbrances can be found in the rpmfusion.org repository.

After the install your next concern is undoubtedly configuring things and installing new packages. Fedora’s command-line package manager is dnf. It works as you would expect.

Note also that since rpm uses file-based dependency tracking instead of package-based dependency tracking, as almost all others do, there are very few traditional metapackages. There are, however, package groups. To get a list of package groups, the command is:

$ dnf group list

To get a list of all installed packages on the system, the command is:

$ rpm -qa

All rpm commands are easily filterable using traditional Unix tools. So you should have no trouble adapting your workflow to the new environment. All the information gathered with the below commands can also be gathered through the dnf command. For information gathering, I prefer to use the rpm command because it presents information in a way that is easily parseable by commands like grep. But if you are making changes to the system, it is easier and safer to use dnf.
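
For example, to find every installed package with “kernel” in its name, or to see the most recently installed packages, you can combine rpm with grep and head (the package names that come back will of course depend on your system):

$ rpm -qa | grep -i kernel
$ rpm -qa --last | head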

To get a package’s version, description, and other metainformation the command is:

$ rpm -qi <packagename>

To list the contents of an installed package the command is:

$ rpm -ql <packagename>

One way in which rpm is easier to use than dpkg or the Slackware package tools is that rpm stores change log information for each package in the package manager database itself, so it is very easy to diagnose whether an update might have broken or changed something unexpectedly. This command is:

$ rpm -q --changes <packagename>

On the kernel

Perhaps one of the most exciting differences between Fedora and other projects, for newcomers at least, is Fedora’s policy on the kernel. Fedora’s policy is to align the distribution’s kernel package life cycle with the upstream mainline kernel life cycle. This means that every Fedora release will have multiple major kernel versions during its lifetime.

This offers several advantages for both users and developers. Primarily, Fedora users are among the first to receive all of the latest drivers, security fixes, new features, etc.

If you do not have an installation that uses out-of-tree modules or custom patches, this should not be much of a concern to you. However, if you rely on a kernel module like zfs, for example, rebuilding the filesystem module every 2-3 months can get tedious and error prone after a while. This problem only compounds if you depend upon custom patches for your system to work correctly. There is good news and bad news on this issue.

The good news is that Fedora’s process for building a custom kernel is well documented.

The bad news is, as with all things kernel related in all projects, going the custom route means you’re on your own in terms of support. The 2-3 month lifecycle means you’ll be building modules and kernels far more often than you are used to. This may be a deal breaker for some. But even this offers an advantage to the discerning or adventurous user. You will find that being encouraged to rebase your custom kernel setup every two to three months will give you far greater insight into what is going on upstream in mainline Linux and the various out-of-tree projects you rely on.

Conclusion

Hopefully these tips will get you started exploring and configuring your new Fedora system. Once you have done that, I urge you to explore the community. Like any other free software product of Fedora’s age and size, there are a plethora of communication channels available. You should read the code of conduct and then head over to the communication page on the wiki to get started. As with the distribution itself, for all the differences in culture you will find that much remains the same.


Backup and restore Toolboxes

Toolboxes started life often described as disposable containers – and that is still one of their major uses: install stuff, then try it out in the relative safety of a container, and lastly, cleanly dispose of it. Minimal risk and fuss, and no pesky residual libraries and applications hanging around on the host long after you have finished.

So, why would you back up a Toolbox? Sometimes, they have more permanent uses, contain complex and lengthy installs, or are being used for critical applications. For example, Toolboxes can be used as a development environment, containing hardware-associated drivers and applications. Or they could be used for an application you want to run in a container for which there is no Flatpak, or one that has requirements a Flatpak doesn’t satisfy. While they can be handy to use on Fedora Workstation, toolbox containers are often essential for Silverblue users since they offer an easy solution to installing applications that can’t successfully be installed by rpm-ostree, or for applications that may not have a Flatpak version readily available. In the above situations a busted Toolbox can be a major headache. But if a backup exists, you can quickly restore a Toolbox or move it to another workstation.

The backup process uses Podman to create an image of an existing toolbox container, and save that image to an archive file. To restore the toolbox container, load the image from the archive file and then create a Toolbox from that image. The new toolbox container will be an identical copy of your backed up toolbox container.

It is important to note this process does not back up data, just what you have installed in the toolbox container. This includes packages installed from repositories or from a local rpm file using dnf. If you need to back up data, Podman’s commit command, which will be used below to capture an image of the toolbox container, has an option to include volumes attached to the container.
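
If you do want volume data captured as well, the relevant option is documented in the podman-commit(1) man page; at the time of writing it is --include-volumes, but verify it against the podman version you have installed. A hypothetical invocation would look like this:

$ podman container commit --include-volumes -p container_ID backup-image-name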

Creating a backup

To back up a toolbox container you will need its name and container ID, which you can get using toolbox list. For this example I am going to back up my golang development toolbox container, imaginatively named go.

$ toolbox list
CONTAINER ID     CONTAINER NAME     CREATED       STATUS    IMAGE NAME
00ff783a102f     go                 5 weeks ago   exited    registry.fedoraproject.org/f32/fedora-toolbox:32

If the container’s status shows as running, you should stop it using podman container stop container_name. Although the commit command has a -p (pause) option, make sure that the Toolbox is not running; this helps it initialize correctly when restored from backup.

$ podman container stop go

To create an image of the toolbox container, use:

podman container commit -p container_ID backup-image-name

Depending on the complexity of the Toolbox, this can take a little while.

 $ podman container commit -p 00ff783a102f go-backup

Now to confirm the image has been created type…

$ toolbox list

You should get output similar to what is below…

IMAGE ID      IMAGE NAME                    CREATED
cfcb13046db7  localhost/go-backup:latest    About a minute ago

CONTAINER ID  CONTAINER NAME  CREATED      STATUS  IMAGE NAME
00ff783a102f  go              5 weeks ago  exited  registry.fedoraproject.org/f32/fedora-toolbox:32

Now save the backup image to a tar archive file using podman save -o backup-filename.tar backup-image-name.

$ podman save -o go.tar go-backup

Confirm the archive file, our toolbox container backup, was created.

$ ls go.tar 

Do some tidying up, remove the backup image and, if needed, remove the original Toolbox.

$ podman rmi go-backup
$ toolbox rm go

Restore a backup

To create an image from the backup file that was made above, use the command podman load -i backup_filename.

$ podman load -i go.tar

Then you can confirm the image was created with…

$ toolbox list
IMAGE ID      IMAGE NAME                    CREATED
cfcb13046db7  localhost/go-backup:latest    17 minutes ago

Now create a toolbox container from the restored image, with toolbox create --container container_name --image image_name, specifying the full repository and version tag as the image name.

$ toolbox create --container go --image localhost/go-backup:latest

Confirm that the toolbox was created.

$ toolbox list
IMAGE ID      IMAGE NAME                    CREATED
cfcb13046db7  localhost/go-backup:latest    20 minutes ago

CONTAINER ID  CONTAINER NAME  CREATED         STATUS      IMAGE NAME
34cef6b7e28d  go              21 seconds ago  configured  localhost/go-backup:latest

Finally, you can test that the restored Toolbox works…

$ toolbox enter --container go

If you can enter the newly created toolbox container, you will see the toolbox prompt and will have successfully backed up and restored your pet toolbox container.


Real-time noise suppression for video conferencing

With people doing video conferencing all day, good audio has recently become much more important. The best option is obviously a proper audio studio. Unfortunately, this is not something you will always have and you might need to make do with a much simpler setup.

In such situations, a noise reduction filter that keeps your voice but filters out ambient noises (street noise, keyboard, …) can be very helpful. In this article, we will take a look at how to integrate such a filter into PulseAudio so that it can easily be used in all applications with no additional requirements on their part.

Example of switching on noise reduction

The Idea

We set up PulseAudio for live noise-reduction using an LADSPA filter.

This creates a new PulseAudio source which can be used as a virtual microphone. Other applications will not even realize that they are not dealing with physical devices and you can select it as if you had an additional microphone connected.

Terminology

Before we start, it is good to know the following two PulseAudio terms to better understand what we are doing:

  • source – represents a source from which audio can be obtained, like a microphone
  • sink – represents a consumer of audio, like a speaker

Each PulseAudio sink also has a source called monitor which can be used to get the audio put into that sink. For example, you could have audio put out by your headphones while using the monitor of your headphone device to record the output.

Installation

While PulseAudio is usually pre-installed, we need to get the LADSPA filter for noise reduction. You can build and install the filter manually, but it is much easier to install the filter via Fedora Copr:

sudo dnf copr enable -y lkiesow/noise-suppression-for-voice
sudo dnf install -y ladspa-realtime-noise-suppression-plugin

Note that the Copr projects are not maintained and quality-controlled by Fedora directly.

Enable Noise Reduction Filter

First, you need to identify the name of the device you want to apply the noise reduction to. In this example, we’ll use the RODE NT-USB microphone as input.

$ pactl list sources short
0 alsa_input.usb-RODE_Microphones_RODE_NT-USB-00.iec958-stereo …
1 alsa_output.usb-0c76_USB_Headphone_Set-00.analog-stereo.monitor …

Next, we create a new PulseAudio sink, the filter and a loopback between microphone and filter. That way, the output from the microphone is used as input for the noise reduction filter. The output from this filter will then be available via the null sink monitor.

To visualize this, here is the path the audio will travel from the microphone to, for example, a browser:

mic → loopback → ladspa filter → null sink [monitor] → browser

While this sounds complicated, it is set up with just a few simple commands:

pacmd load-module module-null-sink \
    sink_name=mic_denoised_out
pacmd load-module module-ladspa-sink \
    sink_name=mic_raw_in \
    sink_master=mic_denoised_out \
    label=noise_suppressor_stereo \
    plugin=librnnoise_ladspa \
    control=50
pacmd load-module module-loopback \
    source=alsa_input.usb-RODE_Microphones_RODE_NT-USB-00.iec958-stereo \
    sink=mic_raw_in \
    channels=2

That’s it. You should now be able to select the new device.

New recording devices in pavucontrol
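
If you ever want to undo the setup without restarting PulseAudio, the same modules can be unloaded again. Be aware that unloading by module name removes every instance of that module, so take care if you have other loopbacks or null sinks in use:

pacmd unload-module module-loopback
pacmd unload-module module-ladspa-sink
pacmd unload-module module-null-sink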

Chromium

Unfortunately, browsers based on Chromium will hide monitor devices by default. This means that we cannot select the newly created noise-reduction device in the browser. One workaround is to select another device first, then use pavucontrol to assign the noise-reduction device afterward.

But if you do this on a regular basis, you can work around the issue by using the remap-source module to convert the null sink monitor to a regular PulseAudio source. The module is actually meant for remapping audio channels – e.g. swapping left and right channel on stereo audio – but we can just ignore these additional capabilities and create a new source similar to the monitor:

pacmd load-module module-remap-source \
    source_name=denoised \
    master=mic_denoised_out.monitor \
    channels=2

The remapped device delivers audio identical to the original one so that assigning this with PulseAudio will yield no difference. But this device does now show up in Chromium:

Remapped monitor device in Chrome

Improvements

While the guide above should help you with all the basics and will get you a working setup, there are a few things you can improve.

But while the commands above should generally work, you might need to experiment with the following suggestions.

Latency

By default, the loopback module will introduce a slight audio latency. You can hear this by running an echo test:

gst-launch-1.0 pulsesrc ! pulsesink

You might be able to reduce this latency by using the latency_msec option when loading the loopback module:

pacmd load-module module-loopback \
    latency_msec=1 \
    source=alsa_input.usb-RODE_Microphones_RODE_NT-USB-00.iec958-stereo \
    sink=mic_raw_in \
    channels=2

Voice Threshold

The noise reduction library provides controls for a voice threshold. The filter will return silence if the probability for sound being voice is lower than this threshold. In other words, the higher you set this value, the more aggressive the filter becomes.

You can pass different thresholds to the filter by supplying them as control argument when the ladspa-sink module is being loaded.

pacmd load-module module-ladspa-sink \
    sink_name=mic_raw_in \
    sink_master=mic_denoised_out \
    label=noise_suppressor_stereo \
    plugin=librnnoise_ladspa \
    control=95

Mono vs Stereo

The example above will work with stereo audio. When working with a simple microphone, you may want to use a mono signal instead.

For switching to mono, use the following values instead when loading the different modules (a complete example follows the list):

  • label=noise_suppressor_mono – when loading the ladspa-sink module
  • channels=1 – when loading the loopback and remap-source modules
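
Put together, a mono variant of the earlier setup might look like the following. The microphone name is the same example device used above, and control=50 is just the value from the first example; adjust both to your needs. If you also use the remap-source workaround from the Chromium section, load it with channels=1 as well:

pacmd load-module module-null-sink \
    sink_name=mic_denoised_out
pacmd load-module module-ladspa-sink \
    sink_name=mic_raw_in \
    sink_master=mic_denoised_out \
    label=noise_suppressor_mono \
    plugin=librnnoise_ladspa \
    control=50
pacmd load-module module-loopback \
    source=alsa_input.usb-RODE_Microphones_RODE_NT-USB-00.iec958-stereo \
    sink=mic_raw_in \
    channels=1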

Persistence

Using the pacmd command for the setup, settings are not persistent and will disappear if PulseAudio is restarted. You can add these commands to your PulseAudio configuration file if you want them to be persistent. For that, edit ~/.config/pulse/default.pa and add your commands like this:

.include /etc/pulse/default.pa
load-module module-null-sink sink_name=mic_denoised_out
load-module module-ladspa-sink …
…

Limitations

If you listen to the example above, you will notice that the filter reliably reduces background noise. But unfortunately, depending on the situation, it can also cause a loss in voice quality.

The following example shows the results with some street noise. Activating the filter reliably removes the noise, but in this example, the voice quality noticeably drops as well:

Noise reduction of constant street noise

In conclusion, this filter can help if you find yourself in less than ideal audio scenarios. It is also very effective if you are not the main speaker in a video conference and you do not want to constantly mute yourself.

Still, good audio equipment and a quiet environment will always be better.

Have fun.


Demonstrating Perl with Tic-Tac-Toe, Part 4

This is the final article in the series demonstrating Perl with Tic-Tac-Toe. This article provides a module that can compute better game moves than the previously presented modules. For fun, the modules chip1.pm through chip3.pm can be incrementally moved out of the hal subdirectory in reverse order. With each chip that is removed, the game will become easier to play. The game must be restarted each time a chip is removed.

An example Perl program

Copy and paste the below code into a plain text file and use the same one-liner that was provided in the first article of this series to strip the leading numbers. Name the version without the line numbers chip3.pm and move it into the hal subdirectory. Use the version of the game that was provided in the second article so that the below chip will automatically load when placed in the hal subdirectory. Be sure to also include both chip1.pm and chip2.pm from the second and third articles, respectively, in the hal subdirectory.

00 # artificial intelligence chip
01
02 package chip3;
03 require chip2;
04 require chip1;
05
06 use strict;
07 use warnings;
08
09 sub moverama {
10 my $game = shift;
11 my @nums = $game =~ /[1-9]/g;
12 my $rama = qr/[1973]/;
13 my %best;
14
15 for (@nums) {
16 my $ra = $_;
17 next unless $ra =~ $rama;
18 $best{$ra} = 0;
19 for (@nums) {
20 my $ma = $_;
21 next unless $ma =~ $rama;
22 if (($ra-$ma)*(10-$ra-$ma)) {
23 $best{$ra} += 1;
24 }
25 }
26 }
27
28 @nums = sort { $best{$b} <=> $best{$a} } keys %best;
29
30 return $nums[0];
31 }
32
33 sub hal_move {
34 my $game = shift;
35 my $mark = shift;
36 my @mark = @{ shift; };
37 my $move;
38
39 $move = chip2::win_move $game, $mark, \@mark;
40
41 if (not defined $move) {
42 $mark = ($mark eq $mark[0]) ? $mark[1] : $mark[0];
43 $move = chip2::win_move $game, $mark, \@mark;
44 }
45
46 if (not defined $move) {
47 $move = moverama $game;
48 }
49
50 if (not defined $move) {
51 $move = chip1::hal_move $game;
52 }
53
54 return $move;
55 }
56
57 sub complain {
58 print 'Just what do you think you\'re doing, ',
59 ((getpwnam($ENV{'USER'}))[6]||$ENV{'USER'}) =~ s! .*!!r, "?\n";
60 }
61
62 sub import {
63 no strict;
64 no warnings;
65
66 my $p = __PACKAGE__;
67 my $c = caller;
68
69 *{ $c . '::hal_move' } = \&{ $p . '::hal_move' };
70 *{ $c . '::complain' } = \&{ $p . '::complain' };
71
72 if (&::MARKS->[0] ne &::HAL9K) {
73 @{ &::MARKS } = reverse @{ &::MARKS };
74 }
75 }
76
77 1;

How it works

Rather than making a random move or making a move based on probability, this final module to the Perl Tic-Tac-Toe game uses a more deterministic algorithm to calculate the best move.

The big takeaway from this Perl module is that it is yet another example of how references can be misused or abused, and as a consequence lead to unexpected program behavior. With the addition of this chip, the computer learns to cheat. Can you figure out how it is cheating? Hints:

  1. Constants are implemented as subroutines.
  2. References allow data to be modified out of scope.

Final notes

Line 12 demonstrates that a regular expression can be pre-compiled and stored in a scalar for later use. This is useful as a performance optimization when you intend to re-use the same regular expression many times over.

Line 59 demonstrates that some system library calls are available directly in Perl’s built-in core functionality. Using the built-in functions alleviates some overhead that would otherwise be required to launch an external program and set up the I/O channels to communicate with it.

Lines 72 and 73 demonstrate the use of &:: as a shorthand for &main::.

The full source code for this Perl game can be cloned from the git repository available here: https://pagure.io/tic-tac-toe.git


LaTeX typesetting, Part 3: formatting

This series covers basic formatting in LaTeX. Part 1 introduced lists. Part 2 covered tables. In part 3, you will learn about another great feature of LaTeX: the flexibility of granular document formatting. This article covers customizing the page layout, table of contents, title sections, and page style.

Page dimension

When you first wrote your LaTeX document you may have noticed that the default margin is slightly bigger than you may imagine. The margins are related to the type of paper specified, for example, a4 or letter, and the document class: article, book, report, and so on. To modify the page margins there are a few options; one of the simplest is the fullpage package.

This package sets the body of the page such that the page is almost full.

Fullpage package documentation

The illustration below demonstrates the LaTeX default body compared to using the fullpage package.

Another option is to use the geometry package. Before you explore how the geometry package can manipulate margins, first look at the page dimensions as depicted below.

  1. one inch + \hoffset
  2. one inch + \voffset
  3. \oddsidemargin = 31pt
  4. \topmargin = 20pt
  5. \headheight = 12pt
  6. \headsep = 25pt
  7. \textheight = 592pt
  8. \textwidth = 390pt
  9. \marginparsep = 35pt
  10. \marginparwidth = 35pt
  11. \footskip = 30pt

To set the margin to 1 (one) inch using the geometry package, use the following example:

\usepackage{geometry}
\geometry{a4paper, margin=1in}

In addition to the above example, the geometry command can modify the paper size and orientation. To change the size of the paper, use the example below:

\usepackage[a4paper, total={7in, 8in}]{geometry}

To change the page orientation, you need to add landscape to the geometry options as shown below:

\usepackage{geometry}
\geometry{a4paper, landscape, margin=1.5in}
Landscape Orientation

Table of contents

By default, a LaTeX table of contents is titled “Contents”. There are times when you prefer to relabel the text to be “Table of Contents”, change the vertical spacing between the ToC and your first section or chapter, or simply change the color of the text.

To change the text, add the following lines to your preamble, substituting english with your desired language:

\usepackage[english]{babel}
\addto\captionsenglish{
\renewcommand{\contentsname}
{\bfseries{Table of Contents}}}

To manipulate the vertical spacing between the ToC and the list of figures, sections, and chapters, use the tocloft package. The two options used in this article are cftbeforesecskip and cftaftertoctitleskip.

The tocloft package provides means of controlling the typographic design of the ToC, List of Figures and List of Tables.

Tocloft package documentation

\usepackage{tocloft}
\setlength\cftbeforesecskip{2pt}
\setlength\cftaftertoctitleskip{30pt}

cftbeforesecskip is the spacing between the sections in the ToC, while
cftaftertoctitleskip is the space between text “Table of Contents” and the first section in the ToC. The below image shows the differences between the default and the modified ToC.

Default ToC
Customized ToC

Borders

When using the package hyperref in your document, LaTeX section lists in the ToC and references including \url have a border, as shown in the images below.

To remove these borders, include the following in the preamble. In the previous section, “Table of Contents,” you will see that there are no borders in the ToC.

\usepackage{hyperref}
\hypersetup{ pdfborder = {0 0 0}}

Title section

To modify the title section font, style, and/or color, use the package titlesec. In this example, you will change the font size, font style, and font color of the section, subsection, and subsubsection. First, add the following to the preamble.

\usepackage{titlesec}
\titleformat*{\section}{\Huge\bfseries\color{darkblue}}
\titleformat*{\subsection}{\huge\bfseries\color{darkblue}}
\titleformat*{\subsubsection}{\Large\bfseries\color{darkblue}}

Taking a closer look at the code, \titleformat*{\section} specifies the section depth to format. The above example uses up to the third depth. The {\Huge\bfseries\color{darkblue}} portion specifies the font size, font style, and font color.

Page style

To customize the page headers and footers, one of the packages you can use is fancyhdr. This example uses this package to modify the page style, header, and footer. The code below provides a brief description of what each option does.

\pagestyle{fancy} %for header to be on each page
\fancyhead[L]{} %keep left header blank
\fancyhead[C]{} %keep centre header blank
\fancyhead[R]{\leftmark} %add the section/chapter to the header right
\fancyfoot[L]{Static Content} %add static text to the left footer
\fancyfoot[C]{} %keep centre footer blank
\fancyfoot[R]{\thepage} %add the page number to the right footer
\setlength\voffset{-0.25in} %space between page border and header (1in + space)
\setlength\headheight{12pt} %height of the actual header.
\setlength\headsep{25pt} %separation between header and text.
\renewcommand{\headrulewidth}{2pt} % add header horizontal line
\renewcommand{\footrulewidth}{1pt} % add footer horizontal line

The results of this change are shown below:

Header
Footer

Tips

Centralize the preamble

If you write many TeX documents, you can create a .tex file with all of your preamble, based on your document categories, and reference this file. For example, I use an article_structure.tex as shown below.

$ cat article_structure.tex
\usepackage[english]{babel}
\addto\captionsenglish{
\renewcommand{\contentsname}
{\bfseries{\color{darkblue}Table of Contents}}
} % Relable the contents
%\usepackage[margin=0.5in]{geometry} % specifies the margin of the document
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{graphicx} % allows you to add graphics to the document
\usepackage{hyperref} % permits redirection of URL from a PDF document
\usepackage{fullpage} % format the content to utilise the full page
%\usepackage{a4wide}
\usepackage[export]{adjustbox} % to force image position
%\usepackage[section]{placeins} % to have multiple images in a figure
\usepackage{tabularx} % for wrapping text in a table
%\usepackage{rotating}
\usepackage{multirow}
\usepackage{subcaption} % to have multiple images in a figure
%\usepackage{smartdiagram} % initialise smart diagrams
\usepackage{enumitem} % to manage the spacing between lists and enumeration
\usepackage{fancyhdr} %, graphicx} %for header to be on each page
\pagestyle{fancy} %for header to be on each page
%\fancyhf{}
\fancyhead[L]{}
\fancyhead[C]{}
\fancyhead[R]{\leftmark}
\fancyfoot[L]{Static Content} %\includegraphics[width=0.02\textwidth]{virgin_voyages.png}}
\fancyfoot[C]{} % clear center
\fancyfoot[R]{\thepage}
\setlength\voffset{-0.25in} %Space between page border and header (1in + space)
\setlength\headheight{12pt} %Height of the actual header.
\setlength\headsep{25pt} %Separation between header and text.
\renewcommand{\headrulewidth}{2pt} % adds horizontal line
\renewcommand{\footrulewidth}{1pt} % add horizontal line (footer)
%\renewcommand{\oddsidemargin}{2pt} % adjust the margin spacing
%\renewcommand{\pagenumbering}{roman} % change the numbering style
%\renewcommand{\hoffset}{20pt}
%\usepackage{color}
\usepackage[table]{xcolor}
\hypersetup{ pdfborder = {0 0 0}} % removes the red border from the table of contents
%\usepackage{wasysym} %add checkbox
%\newcommand\insq[1]{%
% \Square\ #1\quad%
%} % specify the command to add checkbox
%\usepackage{xcolor}
%\usepackage{colortbl}
%\definecolor{Gray}{gray}{0.9} % create new colour
%\definecolor{LightCyan}{rgb}{0.88,1,1} % create new colour
%\usepackage[first=0,last=9]{lcg}
%\newcommand{\ra}{\rand0.\arabic{rand}}
%\newcolumntype{g}{>{\columncolor{LightCyan}}c} % create new column type g
%\usesmartdiagramlibrary{additions}
%\setcounter{figure}{0}
\setcounter{secnumdepth}{0} % sections are level 1
\usepackage{csquotes} % the proper way of using double quotes
%\usepackage{draftwatermark} % Enable watermark
%\SetWatermarkText{DRAFT} % Specify watermark text
%\SetWatermarkScale{5} % Toggle watermark size
\usepackage{listings} % add code blocks
\usepackage{titlesec} % Manipulate section/subsection
\titleformat*{\section}{\Huge\bfseries\color{darkblue}} % update sections to bold with the colour blue
\titleformat*{\subsection}{\huge\bfseries\color{darkblue}} % update subsections to bold with the colour blue
\titleformat*{\subsubsection}{\Large\bfseries\color{darkblue}} % update subsubsections to bold with the colour blue
\usepackage[toc]{appendix} % Include appendix in TOC
\usepackage{xcolor}
\usepackage{tocloft} % for manipulating Table of Contents vertical spacing
%\setlength\cftparskip{-2pt}
\setlength\cftbeforesecskip{2pt} %spacing between the sections
\setlength\cftaftertoctitleskip{30pt} % space between the first section and the text ``Table of Contents''
\definecolor{navyblue}{rgb}{0.0,0.0,0.5}
\definecolor{zaffre}{rgb}{0.0, 0.08, 0.66}
\definecolor{white}{rgb}{1.0, 1.0, 1.0}
\definecolor{darkblue}{rgb}{0.0, 0.2, 0.6}
\definecolor{darkgray}{rgb}{0.66, 0.66, 0.66}
\definecolor{lightgray}{rgb}{0.83, 0.83, 0.83}
%\pagenumbering{roman}

In your articles, refer to this preamble file with \input, as shown in the example below:

\documentclass[a4paper,11pt]{article}
\input{/path_to_structure.tex}
\begin{document}
…...
\end{document}

Add watermarks

To enable watermarks in your LaTeX document, use the draftwatermark package. The code snippet below demonstrates how to add a watermark to your document. By default the watermark color is grey, but it can be changed to a color of your choice.

\usepackage{draftwatermark}
\SetWatermarkText{\color{red}Classified} %add watermark text
\SetWatermarkScale{4} %specify the size of the text

Conclusion

In this series you saw some of the basic, but rich features that LaTeX provides for customizing your document to cater to your needs or the audience the document will be presented to. With LaTeX, there are many packages available to customize the page layout, style, and more.

Posted on Leave a comment

SCP user’s migration guide to rsync

As part of the 8.0 pre-release announcement, the OpenSSH project stated that they consider the scp protocol outdated, inflexible, and not readily fixed. They then go on to recommend the use of sftp or rsync for file transfer instead.

Many users grew up on the scp command, however, and so are not familiar with rsync. Additionally, rsync can do much more than just copy files, which can give a beginner the impression that it’s complicated and opaque. This is especially true because the scp flags broadly map directly to the cp flags, while the rsync flags do not.

This article will provide an introduction and transition guide for anyone familiar with scp. Let’s jump into the most common scenarios: Copying Files and Copying Directories.

Copying files

For copying a single file, the scp and rsync commands are effectively equivalent. Let’s say you need to ship foo.txt to your home directory on a server named server.

$ scp foo.txt me@server:/home/me/

The equivalent rsync command requires only that you type rsync instead of scp:

$ rsync foo.txt me@server:/home/me/

Copying directories

For copying directories, things diverge quite a bit, which probably explains why rsync is seen as more complex than scp. If you want to copy the directory bar to server, the corresponding scp command looks exactly like the cp command except for specifying ssh information:

$ scp -r bar/ me@server:/home/me/

With rsync, there are more considerations, as it’s a more powerful tool. First, let’s look at the simplest form:

$ rsync -r bar/ me@server:/home/me/

Looks simple right? For the simple case of a directory that contains only directories and regular files, this will work. However, rsync cares a lot about sending files exactly as they are on the host system. Let’s create a slightly more complex, but not uncommon, example.

# Create a multi-level directory structure
$ mkdir -p bar/baz
# Create a file at the root directory
$ touch bar/foo.txt
# Now create a symlink which points back up to this file
$ cd bar/baz
$ ln -s ../foo.txt link.txt
# Return to our original location
$ cd -

We now have a directory tree that looks like the following:

bar
├── baz
│   └── link.txt -> ../foo.txt
└── foo.txt

1 directory, 2 files

If we try the commands from above to copy bar, we’ll notice very different (and surprising) results. First, let’s give scp a go:

$ scp -r bar/ me@server:/home/me/

If you ssh into your server and look at the directory tree of bar you’ll notice an important and subtle difference from your host system:

bar
├── baz
│   └── link.txt
└── foo.txt

1 directory, 2 files

Note that link.txt is no longer a symlink. It is now a full-blown copy of foo.txt. This might be surprising behavior if you’re used to cp. If you did try to copy the bar directory using cp -r, you would get a new directory with the exact symlinks that bar had. Now if we try the same rsync command from before we’ll get a warning:

$ rsync -r bar/ me@server:/home/me/
skipping non-regular file "bar/baz/link.txt"

Rsync has warned us that it found a non-regular file and is skipping it. Because you didn’t tell it to copy symlinks, it’s ignoring them. Rsync has an extensive manual section titled “SYMBOLIC LINKS” that explains all of the possible behavior options available to you. For our example, we need to add the --links flag.

$ rsync -r --links bar/ me@server:/home/me/

On the remote server we see that the symlink was copied over as a symlink. Note that this is different from how scp copied the symlink.

bar/
├── baz
│   └── link.txt -> ../foo.txt
└── foo.txt

1 directory, 2 files

To save some typing and take advantage of more file-preserving options, use the --archive (-a for short) flag whenever copying a directory. The archive flag will do what most people expect as it enables recursive copy, symlink copy, and many other options.

$ rsync -a bar/ me@server:/home/me/

The rsync man page has in-depth explanations of what the archive flag enables if you’re curious.

Caveats

There is one caveat, however, to using rsync. It’s much easier to specify a non-standard ssh port with scp than with rsync. If server were listening for SSH connections on port 8022, for instance, then those commands would look like this:

$ scp -P 8022 foo.txt me@server:/home/me/

With rsync, you have to specify the “remote shell” command to use. This defaults to ssh. You do so using the -e flag.

$ rsync -e 'ssh -p 8022' foo.txt me@server:/home/me/

Rsync does use your ssh config, however, so if you are connecting to this server frequently, you can add the following snippet to your ~/.ssh/config file. Then you no longer need to specify the port for the rsync or ssh commands!

Host server
    Port 8022

Alternatively, if every server you connect to runs on the same non-standard port, you can configure the RSYNC_RSH environment variable.
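For example, you could export it in your shell profile (a minimal sketch, reusing the hypothetical port 8022 from above):

$ export RSYNC_RSH='ssh -p 8022'
$ rsync foo.txt me@server:/home/me/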

Why else should you switch to rsync?

Now that we’ve covered the everyday use cases and caveats for switching from scp to rsync, let’s take some time to explore why you probably want to use rsync on its own merits. Many people have made the switch to rsync long before now on these merits alone.

In-flight compression

If you have a slow or otherwise limited network connection between you and your server, rsync can spend more CPU cycles to save network bandwidth. It does this by compressing data before sending it. Compression can be enabled with the -z flag.
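The -z flag combines with the flags shown earlier. For example, a compressed copy of the bar directory to the same hypothetical server might look like this:

$ rsync -az bar/ me@server:/home/me/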

Delta transfers

Rsync also only copies a file if the target file is different from the source file. This works recursively through directories. For instance, if you took our final bar example above and re-ran that rsync command multiple times, it would do no work after the initial transfer. Using rsync even for local copies is worth it if you know you will repeat them, such as backing up to a USB drive, because this feature alone can save a lot of time with large data sets.
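As a sketch, repeating a local backup to a (hypothetical) USB drive mount point only transfers the files that changed since the previous run:

$ rsync -a ~/Documents/ /run/media/me/usbdrive/Documents/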

Syncing

As the name implies, rsync can do more than just copy data. So far, we’ve only demonstrated how to copy files with rsync. If you instead want rsync to make the target directory look like your source directory, you can add the --delete flag. With this flag, rsync copies files from the source directory that don’t exist in the target directory, and then removes files from the target directory that don’t exist in the source directory. The result is a target directory identical to the source directory. By contrast, scp will only ever add files to the target directory.
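As a sketch, turning the earlier directory copy into a true sync of the same hypothetical server looks like this:

$ rsync -a --delete bar/ me@server:/home/me/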

Conclusion

For simple use cases, rsync is not significantly more complicated than the venerable scp tool. The only significant difference is the use of -a instead of -r for recursive copying of directories. However, as we saw, rsync’s -a flag behaves more like cp’s -r flag than scp’s -r flag does.

Hopefully, with these new commands, you can speed up your file transfer workflow!

Posted on Leave a comment

Spam Classification with ML-Pack

Introduction

ML-Pack is a small-footprint C++ machine learning library that can be easily integrated into other programs. It is an actively developed open source project released under a BSD-3 license. Machine learning has gained popularity due to the large amount of electronic data that can be collected. Some other popular machine learning frameworks include TensorFlow, MxNet, PyTorch, Chainer and PaddlePaddle; however, these are designed for more complex workflows than ML-Pack. On Fedora, ML-Pack is packaged by its lead developer Ryan Curtin. In addition to a command line interface, ML-Pack has bindings for Python and Julia. Here, we will focus on the command line interface, since this may be useful for system administrators to integrate into their workflows.

Installation

You can install ML-Pack on the Fedora command line using

$ sudo dnf -y install mlpack mlpack-bin

You can also install the documentation, development headers and Python bindings by using …

$ sudo dnf -y install mlpack-doc \
mlpack-devel mlpack-python3

though they will not be used in this introduction.

Example

As an example, we will train a machine learning model to classify spam SMS messages. To keep this article brief, Linux commands will not be fully explained, but you can find out more about them by using the man command. For example, for the first command used below, wget,

$ man wget

will tell you that wget downloads files from the web and show the options you can use with it.

Get a dataset

We will use an example spam dataset in Indonesian provided by Yudi Wibisono.

$ wget https://drive.google.com/file/d/1-stKadfTgJLtYsHWqXhGO3nTjKVFxm_Q/view
$ unzip dataset_sms_spam_bhs_indonesia_v1.zip

Pre-process dataset

We will try to classify a message as spam or ham by the number of occurrences of each word in the message. We first change the file line endings, remove line 243 which is missing a label, and then remove the header from the dataset. Then we split our data into two files, one with the labels and one with the messages. Since the label is the last character of each line, each line is reversed so that the label can be cut off into one file and the remaining message placed in another file.

$ tr '\r' '\n' < dataset_sms_spam_v1.csv > dataset.txt
$ sed '243d' dataset.txt > dataset1.csv
$ sed '1d' dataset1.csv > dataset.csv
$ rev dataset.csv | cut -c1 | rev > labels.txt
$ rev dataset.csv | cut -c2- | rev > messages.txt
$ rm dataset.csv
$ rm dataset1.csv
$ rm dataset.txt

Machine learning works on numeric data, so we will use labels of 0 for ham and 1 for spam. The dataset contains three labels: 0, normal SMS (ham); 1, fraud (spam); and 2, promotion (spam). We will label all spam as 1, so both promotions and fraud are labelled 1.

$ tr '2' '1' < labels.txt > labels.csv
$ rm labels.txt

The next step is to convert all text in the messages to lower case and, for simplicity, remove punctuation and any symbols that are not spaces, line endings or in the range a-z (one would need to expand this range of symbols for production use):

$ tr '[:upper:]' '[:lower:]' < \
messages.txt > messagesLower.txt
$ tr -Cd 'abcdefghijklmnopqrstuvwxyz \n' < \
messagesLower.txt > messagesLetters.txt
$ rm messagesLower.txt

We now obtain a sorted list of unique words used (this step may take a few minutes, so use nice to give it a low priority while you continue with other tasks on your computer).

$ nice -20 xargs -n1 < messagesLetters.txt > temp.txt
$ sort temp.txt > temp2.txt
$ uniq temp2.txt > words.txt
$ rm temp.txt
$ rm temp2.txt

We then create a matrix where, for each message, the frequency of word occurrences is counted (more on this on Wikipedia, here and here). This requires a few lines of code, so the full script, which should be saved as ‘makematrix.sh’, is below:

#!/bin/bash
# Build a word frequency matrix: one row per message, one column per word.
declare -a words=()
declare -a letterstartind=()
declare -a letterstart=()
letter=" "
i=0
lettercount=0
while IFS= read -r line; do
  labels[i]=$line
  let "i++"
done < labels.csv
i=0
# Read the sorted word list and record where each starting letter begins
while IFS= read -r line; do
  words[i]=$line
  firstletter="$( echo $line | head -c 1 )"
  if [ "$firstletter" != "$letter" ]; then
    letterstartind[lettercount]=$i
    letterstart[lettercount]=$firstletter
    letter=$firstletter
    let "lettercount++"
  fi
  let "i++"
done < words.txt
letterstartind[lettercount]=$i
echo "Created list of letters"
rm -f wordfrequency.txt
touch wordfrequency.txt
messagenum=0
# Count word occurrences for each message (simple linear scan over the word list)
while IFS= read -r line; do
  let "messagenum++"
  declare -a wordcount=()
  declare -a wordarray=()
  read -r -a wordarray <<< "$line"
  for ((j = 0; j < ${#words[@]}; j++)); do wordcount[j]=0; done
  for word in "${wordarray[@]}"; do
    for ((j = 0; j < ${#words[@]}; j++)); do
      if [ "$word" == "${words[$j]}" ]; then let "wordcount[j]++"; break; fi
    done
  done
  echo "${wordcount[*]}" >> wordfrequency.txt
  echo "Processed message ""$messagenum"
done < messagesLetters.txt
# Create csv file
tr ' ' ',' < wordfrequency.txt > data.csv

Since Bash is an interpreted language, this simple implementation can take up to 30 minutes to complete. If using the above Bash script on your primary workstation, run it as a task with low priority so that you can continue with other work while you wait:

$ nice -20 bash makematrix.sh

Once the script has finished running, split the data into testing (30%) and training (70%) sets:

$ mlpack_preprocess_split \
--input_file data.csv \
--input_labels_file labels.csv \
--training_file train.data.csv \
--training_labels_file train.labels.csv \
--test_file test.data.csv \
--test_labels_file test.labels.csv \
--test_ratio 0.3 \
--verbose

Train a model

Now train a Logistic regression model:

$ mlpack_logistic_regression \
--training_file train.data.csv \
--labels_file train.labels.csv --lambda 0.1 \
--output_model_file lr_model.bin

Test the model

Finally we test our model by producing predictions,

$ mlpack_logistic_regression \
--input_model_file lr_model.bin \
--test_file test.data.csv \
--output_file lr_predictions.csv

and comparing the predictions with the exact results,

$ export incorrect=$(diff -U 0 lr_predictions.csv \
test.labels.csv | grep '^@@' | wc -l)
$ export tests=$(wc -l < lr_predictions.csv)
$ echo "scale=2; 100 * ( 1 - $((incorrect)) \
/ $((tests)))" | bc

This gives a validation rate of approximately 90%, similar to that obtained here.

The dataset is composed of approximately 50% spam messages, so the validation rates are quite good without doing much parameter tuning. In typical cases, datasets are unbalanced, with many more entries in some categories than in others. In these cases, a model can achieve a good validation rate while mispredicting the class with few entries. To better evaluate such models, one can compare the number of misclassifications of spam and the number of misclassifications of ham. Of particular importance in applications is the number of false positive spam results, as these messages are typically not transmitted. The script below produces a confusion matrix, which gives a better indication of misclassification. Save it as ‘confusion.sh’

#!/bin/bash
declare -a labels
declare -a lr
i=0
while IFS= read -r line; do
  labels[i]=$line
  let "i++"
done < test.labels.csv
i=0
while IFS= read -r line; do
  lr[i]=$line
  let "i++"
done < lr_predictions.csv
TruePositiveLR=0
FalsePositiveLR=0
TrueZeroLR=0
FalseZeroLR=0
Positive=0
Zero=0
# Compare each prediction with the true label
for i in "${!labels[@]}"; do
  if [ "${labels[$i]}" == "1" ]; then
    let "Positive++"
    if [ "${lr[$i]}" == "1" ]; then
      let "TruePositiveLR++"
    else
      let "FalseZeroLR++"
    fi
  fi
  if [ "${labels[$i]}" == "0" ]; then
    let "Zero++"
    if [ "${lr[$i]}" == "0" ]; then
      let "TrueZeroLR++"
    else
      let "FalsePositiveLR++"
    fi
  fi
done
echo "Logistic Regression"
echo "Total spam" $Positive
echo "Total ham" $Zero
echo "Confusion matrix"
echo "               Predicted class"
echo "               Ham | Spam"
echo "               --------------"
echo " Actual| Ham  | " $TrueZeroLR "|" $FalsePositiveLR
echo " class | Spam | " $FalseZeroLR "|" $TruePositiveLR
echo ""

then run the script

$ bash confusion.sh

You should get output similar to

Logistic Regression
Total spam 183
Total ham 159
Confusion matrix
               Predicted class
               Ham | Spam
               --------------
 Actual| Ham  |  128 | 31
 class | Spam |  26 | 157

which indicates a reasonable level of classification. Other methods you can try in ML-Pack for this problem include Naive Bayes, random forest, decision tree, AdaBoost and perceptron.

To improve the error rate, you can try other pre-processing methods on the initial data set. Neural networks can give up to 99.95% validation rates; see for example here, here and here. However, using these techniques with ML-Pack cannot be done on the command line interface at present and is best covered in another post.

For more on ML-Pack, please see the documentation.

Posted on Leave a comment

How to configure an SSH proxy server with Squid

Sometimes you can’t connect to an SSH server from your current location. Other times, you may want to add an extra layer of security to your SSH connection. In these cases, connecting to the SSH server via a proxy server is one way to get through.

Squid is a full-featured proxy server application that provides caching and proxy services. It’s normally used to help improve response times and reduce network bandwidth by reusing and caching previously requested web pages during browsing.

However, for this setup you’ll configure Squid as an SSH proxy server, since it’s a robust, trusted proxy server that is easy to configure.

Installation and configuration

Install the squid package using dnf:

$ sudo dnf install squid -y

The squid configuration file is quite extensive but there are only a few things we need to configure. Squid uses access control lists to manage connections.

Edit the /etc/squid/squid.conf file to make sure you have the two lines explained below.

First, specify your local IP network. The default configuration file already has a list of the most common ones but you will need to add yours if it’s not there. For example, if your local IP network range is 192.168.1.X, this is how the line would look:

acl localnet src 192.168.1.0/24

Next, add the SSH port as a safe port by adding the following line:

acl Safe_ports port 22

Save that file. Now enable and restart the squid proxy service:

$ sudo systemctl enable squid
$ sudo systemctl restart squid

By default the squid proxy listens on port 3128. Configure firewalld to allow this:

$ sudo firewall-cmd --add-service=squid --permanent
$ sudo firewall-cmd --reload

Testing the ssh proxy connection

To connect to a server via ssh through a proxy server we’ll be using netcat.

Install nmap-ncat if it’s not already installed:

$ sudo dnf install nmap-ncat -y

Here is an example of a standard ssh connection:

$ ssh user@example.com

Here is how you would connect to that same server using the squid proxy server as a gateway.

This example assumes the squid proxy server’s IP address is 192.168.1.63. You can also use the host-name or the FQDN of the squid proxy server:

$ ssh user@example.com -o "ProxyCommand nc --proxy 192.168.1.63:3128 %h %p"

Here are the meanings of the options:

  • ProxyCommand – Tells ssh that a proxy command will be used to make the connection.
  • nc – The command used to establish the connection to the proxy server. This is the netcat command.
  • %h – The placeholder for the target server’s host name or IP address (example.com in this case).
  • %p – The placeholder for the target server’s port number (22 by default).
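If you connect through this proxy often, you can put the same proxy command into your SSH configuration so that a plain ssh command uses it automatically. A minimal sketch, assuming the proxy address used above:

Host example.com
    ProxyCommand nc --proxy 192.168.1.63:3128 %h %p

With this entry in ~/.ssh/config, running ssh user@example.com routes the connection through the Squid proxy without any extra options.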

There are many ways to configure an SSH proxy server but this is a simple way to get started.