The Fedora Project, through the Marketing team, is happy to announce the first FedoraShareYourScreen week!
We know that even though the stock look of Fedora Linux is awesome, most people love to tweak and adapt their systems to their own workflow. We want to see how your Fedora Linux desktop looks.
FedoraShareYourScreen week
Share your screen with us! Take a screenshot of your desktop and share it. Use the hashtag #FedoraShareYourScreen and mention @fedora on Twitter or @thefedoraproject on Instagram. For Mastodon, just use the hashtag. Avoid showing personal and private info.
Whether you use a full Desktop Environment, just a Window Manager, or only the command line, we want to see how it looks! Share your favorite apps, configs, plugins, widgets and everything on your desktop (including your favorite wallpapers if they are SFW).
At the end of the week we will be publishing a slide show on YouTube with all the screens collected during the week! Keep it family friendly; inappropriate content won't be included in the video.
Feel proud of your customization and show it to us! From January 31st to February 6th we will be looking, commenting and sharing feedback on the screenshots shared with the hashtag #FedoraShareYourScreen on Twitter, Instagram and Mastodon!
When is this week?
It will start on January 31st and end on February 6th. We will collect all the screenshots on February 7th and the slide show will be published on February 10th.
Will this happen again?
Of course! We want to see everyone's ideas with all the new stuff that Fedora Linux adds each release. We will be doing this in the middle of each Fedora Linux release cycle. This will give everyone time to customize the desktop and show it in all its shininess!
You do not want someone else to be able to monitor or even control your computer, and you usually work hard to cut off any such attempts using various security mechanisms. However, sometimes a situation occurs when you desperately need a friend, or an expert, to help you with a computer problem, but they are not in the same place at the same time. How do you show them? Should you take your mobile phone, take pictures of your screen, and send those to them? Should you record a video? Certainly not. You can share your screen with them and possibly let them control your computer remotely for a while. In this article, I will describe how to allow sharing the computer screen in Gnome.
Setting up the server to share its screen
A server is a computer that provides (serves) some content that other computers (clients) will consume. In this article the server runs Fedora Workstation with the standard Gnome desktop.
Switching on Gnome Screen Sharing
By default, the ability to share the computer screen in Gnome is off. In order to use it, you need to switch it on:
Start Gnome Control Center.
Click on the Sharing tab.
Switch on sharing with the slider in the upper right corner.
Click on Screen sharing.
Switch on screen sharing using the slider in the upper left corner of the window.
Check Allow connections to control the screen if you want to be able to control the screen from the client. Leaving this unchecked allows only view-only access to the shared screen.
If you want to manually confirm all incoming connections, select New connections must ask for access.
If you want to allow connections from people who know a password (you will not be notified), select Require a password and fill in the password. The password can be at most 8 characters long.
Check Show password to see what the current password is. For a little more protection, do not use your login password here, but choose a different one.
If you have more than one network available, you can choose on which one the screen will be accessible.
Setting up the client to display a remote screen
A client is a computer that connects to a service (or content) provided by a server. This demo also runs Fedora Workstation on the client, but the operating system should not matter much, as long as it runs a decent VNC client.
Check for visibility
Sharing the computer screen in Gnome between the server and the client requires a working network connection and a visible “route” between them. If you cannot make such a connection, you will not be able to view or control the shared screen of the server anyway and the whole process described here will not work.
To make sure a connection exists
Find out the IP address of the server.
Start Gnome Control Center, a.k.a Settings. Use the Menu in the upper right corner, or the Activities mode. When in Activities, type
settings
and click on the corresponding icon.
Select the Network tab.
Click on the Settings button (cogwheel) to display your network profile’s parameters.
Open the Details tab to see the IP address of your computer.
Go to your client’s terminal (the computer from which you want to connect) and find out if there is a connection between the client and the server using the ping command.
$ ping -c 5 192.168.122.225
Examine the command’s output. If it is similar to the example below, the connection between the computers exists.
PING 192.168.122.225 (192.168.122.225) 56(84) bytes of data.
64 bytes from 192.168.122.225: icmp_seq=1 ttl=64 time=0.383 ms
64 bytes from 192.168.122.225: icmp_seq=2 ttl=64 time=0.357 ms
64 bytes from 192.168.122.225: icmp_seq=3 ttl=64 time=0.322 ms
64 bytes from 192.168.122.225: icmp_seq=4 ttl=64 time=0.371 ms
64 bytes from 192.168.122.225: icmp_seq=5 ttl=64 time=0.319 ms

--- 192.168.122.225 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4083ms
rtt min/avg/max/mdev = 0.319/0.350/0.383/0.025 ms
You will probably experience no problems if both computers live on the same subnet, such as at home or at the office, but problems might occur when your server does not have a public IP address and cannot be seen from the external Internet. Unless you are the only administrator of your Internet access point, you will probably need to discuss your situation with your administrator or your ISP. Note that exposing your computer to the external Internet is always a risky strategy and you must pay enough attention to protecting your computer from unwanted access.
Install the VNC client (Remmina)
Remmina is a graphical remote desktop client that you can use to connect to a remote server using several protocols, such as VNC, Spice, or RDP. Remmina is available from the Fedora repositories, so you can install it with either the dnf command or Software, whichever you prefer. With dnf, the following command will install the package and several dependencies.
$ sudo dnf install remmina
Connect to the server
If there is a connection between the server and the client, make sure the following is true:
The computer is running.
The Gnome session is running.
The user with screen sharing enabled is logged in.
The session is not locked, i.e. the user can work with the session.
Then you can attempt to connect to the session from the client:
Start Remmina.
Select the VNC protocol in the dropdown menu on the left side of the address bar.
Type the IP address of the server into the address bar and hit Enter.
When the connection starts, another connection window opens. Depending on the server settings, you may need to wait until the server user allows the connection, or you may have to provide the password.
Type in the password and press OK.
Use the Remmina toolbar buttons to resize the connection window to match the server resolution, or to expand the connection window over your entire desktop. When in fullscreen mode, notice the narrow white bar at the upper edge of the screen. That is the Remmina menu; move the mouse to it when you need to leave fullscreen mode or change some of the settings.
When you return to the server, you will notice that there is now a yellow icon in the top bar indicating that you are sharing the computer screen in Gnome. If you no longer wish to share the screen, you can open that menu, click on Screen is being shared and then select Turn off to stop sharing the screen immediately.
Terminating the screen sharing when the session locks
By default, the connection will always terminate when the session locks. A new connection cannot be established until the session is unlocked.
On one hand, this sounds logical. If you want to share your screen with someone, you might not want them to use your computer when you are not around. On the other hand, the same approach is not very useful if you want to control your own computer from a remote location, be it your bed in another room or your mother-in-law's place. There are two options available to deal with this problem. You can either disable locking the screen entirely, or you can use a Gnome extension that supports unlocking the session via the VNC connection.
Disable screen lock
In order to disable the screen lock:
Open the Gnome Control Center.
Click on the Privacy tab.
Select the Screen Lock settings.
Switch off Automatic Screen Lock.
Now, the session will never lock (unless you lock it manually), so it will be possible to start a VNC connection to it.
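If you prefer the command line, the same setting can be toggled with gsettings using the standard GNOME screensaver schema key; run it as the user whose session is shared. This is a minimal sketch:
$ gsettings set org.gnome.desktop.screensaver lock-enabled false
Set the value back to true to re-enable automatic screen locking.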
Use a Gnome extension to allow unlocking the session remotely
If you do not want to switch off locking the screen, or you want to be able to unlock the session remotely even when it is locked, you will need to install an extension that provides this functionality, as such behavior is not allowed by default. Open the Gnome Shell extensions website (extensions.gnome.org) in Firefox.
In the upper part of the page, find an info block that tells you to install GNOME Shell integration for Firefox.
Install the Firefox extension by clicking on Click here to install browser extension.
After the installation, notice the Gnome logo in the menu part of Firefox.
Click on the Gnome logo to navigate back to the extension page.
Search for allow locked remote desktop.
Click on the displayed item to go to the extension’s page.
Switch the extension ON by using the on/off button on the right.
Now, it will be possible to start a VNC connection at any time. Note that you will need to know the session password to unlock the session. If your VNC password differs from the session password, your session is still somewhat protected.
Conclusion
This article described how to enable sharing the computer screen in Gnome. It mentioned the difference between limited (view-only) access and unlimited (full) access. This solution, however, should in no case be considered a correct approach to enabling remote access for serious tasks, such as administering a production server. Why?
The server always retains control. Anyone working at the server session can still control the mouse and keyboard.
If the session is locked, unlocking it from the client will also unlock it on the server. It will also wake up the display from the stand-by mode. Anybody who can see your server screen will be able to watch what you are doing at the moment.
The VNC protocol per se is not encrypted or otherwise protected, so anything you send over it can be compromised.
There are several ways you can set up a protected VNC connection. You could tunnel it via the SSH protocol for better security, for example. However, these are beyond the scope of this article.
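As a brief illustration of the SSH tunnel approach, the client can forward a local port to the server's VNC port (GNOME shares the screen on port 5900 by default; the user name and address below are placeholders), and Remmina then connects to localhost instead:
$ ssh -L 5901:localhost:5900 user@192.168.122.225
Leave that SSH session open and point Remmina at localhost:5901; the VNC traffic then travels inside the encrypted SSH connection.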
Disclaimer: The above workflow worked without problems on Fedora 35 using several virtual machines. If it does not work for you, then you might have hit a bug. Please report it.
So what is Mutiny? Mutiny allows streaming of objects in an event-driven flow. The stream might originate from a local process or something remote like a database. Mutiny streaming is accomplished with either a Uni or a Multi object. We are using the Uni to stream one object — a List containing many integers. A subscribe pattern initiates the stream.
A traditional program executes a call and waits for the result to be returned before continuing. Mutiny can easily support non-blocking code that runs processes concurrently. RxJava, ReactiveX and even native Java are alternatives. Mutiny is easy to use (the exposed API is minimal) and it is the default in many of the Quarkus extensions. The two extensions used are quarkus-mutiny and quarkus-vertx. Vert.x is the underlying framework wrapped by Quarkus. The Promise classes are supplied by quarkus-vertx. A promise returns a Uni stream when the process is complete. To get started, install a Java JDK and Maven.
Bootstrap
The minimum requirement is either Java-11 or Java-17 with Maven.
With Java-11:
$ sudo dnf install -y java-11-openjdk-devel maven
With Java-17:
$ sudo dnf install -y java-17-openjdk-devel maven
Bootstrap Quarkus and Mutiny with the Maven call below. The extension quarkus-vertx is not included, to demonstrate how to add additional extensions later. Locate an appropriate directory before executing. The directory mutiny-demo will be created with the initial application.
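The exact bootstrap command is not reproduced here; a typical invocation of the Quarkus Maven plugin looks roughly like the following sketch (the plugin version is only illustrative, and -DbuildTool=gradle is an assumption that matches the ./gradlew commands used later in this article):
$ mvn io.quarkus.platform:quarkus-maven-plugin:2.7.1.Final:create \
    -DprojectGroupId=org.demo \
    -DprojectArtifactId=mutiny-demo \
    -DclassName="org.demo.mag.Startup" \
    -Dextensions="mutiny" \
    -DbuildTool=gradle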
The className entry on the Quarkus bootstrap is org.demo.mag.Startup, which creates the file src/main/java/org/demo/mag/Startup.java. Replace the contents with the following code:
package org.demo.mag;

import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.function.IntSupplier;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

import io.quarkus.runtime.Quarkus;
import io.quarkus.runtime.QuarkusApplication;
import io.quarkus.runtime.annotations.QuarkusMain;
import io.smallrye.mutiny.Uni;
import io.smallrye.mutiny.tuples.Tuple2;
import io.vertx.mutiny.core.Promise;

@QuarkusMain
public class Startup implements QuarkusApplication {

    public static void main(String... args) {
        Quarkus.run(Startup.class, args);
    }

    @Override
    public int run(String... args) throws InterruptedException, ExecutionException {
        final Promise<String> finalMessage = Promise.promise();
        final String elapsedTime = "Elapsed time for asynchronous method: %d milliseconds";
        final int[] syncResults = {0};

        Application.runTraditionalMethod();

        final Long millis = System.currentTimeMillis();
        Promise<List<Integer>> promiseRange = Application.getRange(115000);
        Promise<Tuple2<Promise<List<Integer>>, Promise<List<Integer>>>> promiseCombined =
                Application.getCombined(10000, 15000);
        Promise<List<Integer>> promiseReverse = Application.getReverse(24000);

        /*
         * Retrieve the Uni stream and on the complete event obtain the List<Integer>
         */
        promiseRange.future().onItem().invoke(list -> {
            System.out.println("Primes Range: " + list.size());
            if (syncResults[0] == 1) {
                finalMessage.complete(String.format(elapsedTime, System.currentTimeMillis() - millis));
            } else {
                syncResults[0] = 2;
            }
            return;
        }).subscribeAsCompletionStage();

        promiseReverse.future().onItem().invoke(list -> {
            System.out.println("Primes Reverse: " + list.size());
            return;
        }).subscribeAsCompletionStage();

        /*
         * Notice that this finishes before the other two prime generators (smaller lists).
         */
        promiseCombined.future().onItem().invoke(p -> {
            /*
             * Notice that "Combined Range" displays first
             */
            p.getItem2().future().invoke(reverse -> {
                System.out.println("Combined Reverse: " + reverse.size());
                return;
            }).subscribeAsCompletionStage();
            p.getItem1().future().invoke(range -> {
                System.out.println("Combined Range: " + range.size());
                /*
                 * Nesting promises to get multiple results together
                 */
                p.getItem2().future().invoke(reverse -> {
                    System.out.println(String.format("Asserting that expected primes are equal: %d -- %d",
                            range.get(0), reverse.get(reverse.size() - 1)));
                    assert range.get(0) == reverse.get(reverse.size() - 1) : "Generated primes incorrect";
                    if (syncResults[0] == 2) {
                        finalMessage.complete(String.format(elapsedTime, System.currentTimeMillis() - millis));
                    } else {
                        syncResults[0] = 1;
                    }
                    return;
                }).subscribeAsCompletionStage();
                return;
            }).subscribeAsCompletionStage();
            return;
        }).subscribeAsCompletionStage();

        // Note: on very fast machines this may not display first.
        System.out.println("This should display first - indicating asynchronous code.");

        // blocking for final message
        String elapsedMessage = finalMessage.futureAndAwait();
        System.out.println(elapsedMessage);
        return 0;
    }

    public static class Application {

        public static Promise<List<Integer>> getRange(int n) {
            final Promise<List<Integer>> promise = Promise.promise();
            // non-blocking - this is only for demonstration (emulating some remote call)
            new Thread(() -> {
                try {
                    /*
                     * RangeGeneratedPrimes.primes is blocking, only returns when done
                     */
                    promise.complete(RangeGeneratedPrimes.primes(n));
                } catch (Exception exception) {
                    Thread.currentThread().interrupt();
                }
            }).start();
            return promise;
        }

        public static Promise<List<Integer>> getReverse(int n) {
            final Promise<List<Integer>> promise = Promise.promise();
            new Thread(() -> {
                try {
                    // Generating a new object stream
                    promise.complete(ReverseGeneratedPrimes.primes(n));
                } catch (Exception exception) {
                    Thread.currentThread().interrupt();
                }
            }).start();
            return promise;
        }

        public static Promise<Tuple2<Promise<List<Integer>>, Promise<List<Integer>>>> getCombined(int ran, int rev) {
            final Promise<Tuple2<Promise<List<Integer>>, Promise<List<Integer>>>> promise = Promise.promise();
            new Thread(() -> {
                try {
                    Uni.combine().all()
                            /*
                             * Notice that these are running concurrently
                             */
                            .unis(Uni.createFrom().item(Application.getRange(ran)),
                                    Uni.createFrom().item(Application.getReverse(rev)))
                            .asTuple().onItem().call(tuple -> {
                                promise.complete(tuple);
                                return Uni.createFrom().nullItem();
                            })
                            .onFailure().invoke(Throwable::printStackTrace)
                            .subscribeAsCompletionStage();
                } catch (Exception exception) {
                    Thread.currentThread().interrupt();
                }
            }).start();
            return promise;
        }

        public static void runTraditionalMethod() {
            Long millis = System.currentTimeMillis();
            System.out.println("Traditional-1: " + RangeGeneratedPrimes.primes(115000).size());
            System.out.println("Traditional-2: " + RangeGeneratedPrimes.primes(10000).size());
            System.out.println("Traditional-3: " + ReverseGeneratedPrimes.primes(15000).size());
            System.out.println("Traditional-4: " + ReverseGeneratedPrimes.primes(24000).size());
            System.out.println(String.format("Elapsed time for traditional method: %d milliseconds\n",
                    System.currentTimeMillis() - millis));
        }
    }

    public interface Primes {
        static List<Integer> primes(int n) {
            return null;
        }
    }

    public abstract static class PrimeBase {
        static boolean isPrime(int number) {
            return IntStream.rangeClosed(2, (int) (Math.sqrt(number)))
                    .allMatch(n -> number % n != 0);
        }
    }

    public static class RangeGeneratedPrimes extends PrimeBase implements Primes {
        public static List<Integer> primes(int n) {
            return IntStream.rangeClosed(2, n)
                    .filter(x -> isPrime(x)).boxed()
                    .collect(Collectors.toList());
        }
    }

    public static class ReverseGeneratedPrimes extends PrimeBase implements Primes {
        public static List<Integer> primes(int n) {
            List<Integer> list = IntStream.generate(getReverseList(n)).limit(n - 1)
                    .filter(x -> isPrime(x)).boxed()
                    .collect(Collectors.toList());
            return list;
        }

        private static IntSupplier getReverseList(int startValue) {
            IntSupplier reverse = new IntSupplier() {
                private int start = startValue;

                public int getAsInt() {
                    return this.start--;
                }
            };
            return reverse;
        }
    }
}
Testing
The Quarkus install showcases the quarkus-resteasy extension by default. We are not using it, so replace the contents of src/test/java/org/demo/mag/StartupTest.java with a test suited to the new Startup class.
The next step is to build the project. This includes downloading all dependencies as well as compiling and executing the Startup.java program. Everything is included in one file for brevity.
$ ./gradlew quarkusDev
The above command produces a banner and console output from Quarkus and the program.
This is development mode. Notice the prompt: "Press [space] to restart". After editing the code, hit the space bar (or the enter key) to re-compile and execute. Enter q to quit.
To build an Uber jar (all dependencies included) execute:
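The exact command is not preserved above; with the Gradle wrapper used elsewhere in this article it would look something like the following (the jar name is an assumption inferred from the native executable name given below):
$ ./gradlew build -Dquarkus.package.type=uber-jar
$ java -jar build/mutiny-demo-1.0.0-runner.jar
Running the jar produces output similar to the following: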
Traditional-1: 9592
Traditional-2: 1229
Traditional-3: 2262
Traditional-4: 2762
Elapsed time for traditional method: 67 milliseconds
Combined Range: 1229
This should display first - indicating asynchronous code.
Combined Reverse: 2262
Primes Reverse: 2762
Asserting that expected primes are equal: 2 -- 2
Primes Range: 9592
Elapsed time for asynchronous method: 52 milliseconds
You will still get the banner and logs in development mode.
To go one step further, Quarkus can generate an executable out of the box using GraalVM.
$ ./gradlew build -Dquarkus.package.type=native
The executable generated by the above command will be ./build/mutiny-demo-1.0.0-runner.
The default GraalVM is a downloaded container. To override this, set the environment variable GRAALVM_HOME to your local install. Don’t forget to install the native-image with the following command.
$ ${GRAALVM_HOME}/bin/gu install native-image
The Code
The code generates prime numbers for a range, in reverse from a limit, and for a combination of the two. For example, consider the range: "Promise<List<Integer>> promiseRange = Application.getRange(115000);".
This generates all primes between 1 and 115000 and displays the number of primes in the range. It is executed first but displays its results last. The code near the end of the main method — System.out.println (“This should display first – indicating asynchronous code.”);— displays first. This is an example of asynchronous code. We can run multiple processes concurrently. However, the order of completion is unpredictable. The traditional calls are orderly and the results can be collected when completed.
Execution can be blocked until a result is returned. The code does exactly that to display the asynchronous elapsed time message. At the end of the main method we have: "String elapsedMessage = finalMessage.futureAndAwait();". The message arrives from either promiseRange or promiseCombined — the two longest running processes. But even this is not guaranteed. The state of the underlying OS is unknown, and one of the other processes might finish last. Normally, asynchronous calls are nested to coordinate results. This is demonstrated in the promiseCombined promise to evaluate the results of the range and reversed primes.
Conclusion
The comparison between the traditional method and asynchronous method suggests that the asynchronous method can be up to 25% faster on a modern computer. An older CPU that does not have the resources and computing power produces results faster with the traditional method. If a computer has many cores, why not use them‽
More documentation can be found on the following web sites.
This is the first post in a series about network address translation (NAT). Part 1 shows how to use the iptables/nftables packet tracing feature to find the source of NAT-related connectivity problems.
Introduction
Network address translation is one way to expose containers or virtual machines to the wider internet. Incoming connection requests have their destination address rewritten to a different one. Packets are then routed to a container or virtual machine instead. The same technique can be used for load-balancing where incoming connections get distributed among a pool of machines.
Connection requests fail when network address translation is not working as expected. The wrong service is exposed, connections end up in the wrong container, requests time out, and so on. One way to debug such problems is to check that the incoming request matches the expected or configured translation.
Connection tracking
NAT involves more than just changing the IP addresses or port numbers. For instance, when mapping address X to Y, there is no need to add a rule to do the reverse translation. A netfilter system called "conntrack" recognizes packets that are replies to an existing connection. Each connection has its own NAT state attached to it. Reverse translation is done automatically.
Ruleset evaluation tracing
The nftables utility (and, to a lesser extent, iptables) allows examining how a packet is evaluated and which rules in the ruleset it matched. To use this special feature, "trace rules" are inserted at a suitable location. These rules select the packet(s) that should be traced. Let's assume that a host coming from IP address C is trying to reach the service on address S and port P. We want to know which NAT transformation is picked, which rules get checked and whether the packet gets dropped somewhere.
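In the commands that follow, C, S and P are plain shell variables; set them to your own client address, service address and service port first. The values here are only placeholders:
# C=192.0.2.1 ; S=203.0.113.10 ; P=443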
Because we are dealing with incoming connections, add a rule to the prerouting hook point. Prerouting means that the kernel has not yet made a decision on where the packet will be sent. A change to the destination address often results in packets being forwarded rather than being handled by the host itself.
Initial setup
# nft 'add table inet trace_debug'
# nft 'add chain inet trace_debug trace_pre { type filter hook prerouting priority -200000; }'
# nft "insert rule inet trace_debug trace_pre ip saddr $C ip daddr $S tcp dport $P tcp flags syn limit rate 1/second meta nftrace set 1"
The first command adds a new table. This allows easier removal of the trace and debug rules later. A single "nft delete table inet trace_debug" will be enough to undo all rules and chains added to the temporary table during debugging.
The second command creates a base chain hooked before routing decisions are made (prerouting) and with a negative priority value to make sure it is evaluated before connection tracking and the NAT rules.
The only important part, however, is the last fragment of the third rule: "meta nftrace set 1". This enables tracing events for all packets that match the rule. Be as specific as possible to get a good signal-to-noise ratio. Consider adding a rate limit to keep the number of trace events at a manageable level. A limit of one packet per second or per minute is a good choice. The provided example traces all syn and syn/ack packets coming from host $C and going to destination port $P on the destination host $S. The limit clause prevents event flooding. In most cases a trace of a single packet is enough.
The procedure is similar for iptables users. An equivalent trace rule looks like this:
# iptables -t raw -I PREROUTING -s $C -d $S -p tcp --tcp-flags SYN SYN --dport $P -m limit --limit 1/s -j TRACE
Obtaining trace events
Users of the native nft tool can just run the nft trace mode:
# nft monitor trace
This prints out the received packet and all rules that match the packet (use CTRL-C to stop it):
trace id f0f627 ip raw prerouting packet: iif "veth0" ether saddr ..
We will examine this in more detail in the next section. If you use iptables, first check the installed version via the "iptables --version" command. Example:
# iptables --version
iptables v1.8.5 (legacy)
(legacy) means that trace events are logged to the kernel ring buffer, so you will need to check dmesg or journalctl. The debug output lacks some information but is conceptually similar to the one provided by the new tools. You will need to check the rule line numbers that are logged and correlate those to the active iptables ruleset yourself. If the output shows (nf_tables), you can use the xtables-monitor tool:
# xtables-monitor --trace
If the command only shows the version, you will also need to look at dmesg/journalctl instead. xtables-monitor uses the same kernel interface as the nft monitor trace tool. The only differences are that it prints events in iptables syntax and that, if you use a mix of both iptables-nft and nft, it will be unable to print rules that use maps/sets and other nftables-only features.
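For the (legacy) case mentioned above, the trace events end up in the kernel log; a command along these lines will show them (the exact message prefix depends on your logging setup):
# journalctl -k | grep TRACE
# dmesg | grep TRACE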
Example
Let's assume you'd like to debug a non-working port forward to a virtual machine or container. The command "ssh -p 1222 10.1.2.3" should provide remote access to a container running on the machine with that address, but the connection attempt times out.
You have access to the host running the container image. Log in and add a trace rule. See the earlier example on how to add a temporary debug table. The trace rule looks like this:
nft "insert rule inet trace_debug trace_pre ip daddr 10.1.2.3 tcp dport 1222 tcp flags syn limit rate 6/minute meta nftrace set 1"
After the rule has been added, start nft in trace mode: nft monitor trace, then retry the failed ssh command. This will generate a lot of output if the ruleset is large. Do not worry about the large example output below – the next section will do a line-by-line walkthrough.
trace id 9c01f8 inet trace_debug trace_pre packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp tcp dport 1222 tcp flags == syn
trace id 9c01f8 inet trace_debug trace_pre rule ip daddr 10.1.2.3 tcp dport 1222 tcp flags syn limit rate 6/minute meta nftrace set 1 (verdict continue)
trace id 9c01f8 inet trace_debug trace_pre verdict continue
trace id 9c01f8 inet trace_debug trace_pre policy accept
trace id 9c01f8 inet nat prerouting packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp tcp dport 1222 tcp flags == syn
trace id 9c01f8 inet nat prerouting rule ip daddr 10.1.2.3 tcp dport 1222 dnat ip to 192.168.70.10:22 (verdict accept)
trace id 9c01f8 inet filter forward packet: iif "enp0" oif "veth21" ether saddr .. ip daddr 192.168.70.10 .. tcp dport 22 tcp flags == syn tcp window 29200
trace id 9c01f8 inet filter forward rule ct status dnat jump allowed_dnats (verdict jump allowed_dnats)
trace id 9c01f8 inet filter allowed_dnats rule drop (verdict drop)
trace id 20a4ef inet trace_debug trace_pre packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp tcp dport 1222 tcp flags == syn
Line-by-line trace walkthrough
The first line generated is the packet id that triggered the subsequent trace output. Even though this is in the same grammar as the nft rule syntax, it contains header fields of the packet that was just received. You will find the name of the receiving network interface (here named “enp0”) the source and destination mac addresses of the packet, the source ip address (can be important – maybe the reporter is connecting from a wrong/unexpected host) and the tcp source and destination ports. You will also see a “trace id” at the very beginning. This identification tells which incoming packet matched a rule. The second line contains the first rule matched by the packet:
trace id 9c01f8 inet trace_debug trace_pre rule ip daddr 10.1.2.3 tcp dport 1222 tcp flags syn limit rate 6/minute meta nftrace set 1 (verdict continue)
This is the just-added trace rule. The first rule shown is always the one that activates packet tracing. If there were other rules before this one, we would not see them. If there is no trace output at all, the trace rule itself was never reached or did not match. The next two lines tell that there are no further rules and that the "trace_pre" hook allows the packet to continue (policy accept).
The next matching rule is
trace id 9c01f8 inet nat prerouting rule ip daddr 10.1.2.3 tcp dport 1222 dnat ip to 192.168.70.10:22 (verdict accept)
This rule sets up a mapping to a different address and port. Provided 192.168.70.10 really is the address of the desired VM, there is no problem so far. If it's not the correct VM address, the address was either mistyped or the wrong NAT rule was matched.
IP forwarding
Next we can see that the IP routing engine told the IP stack that the packet needs to be forwarded to another host:
trace id 9c01f8 inet filter forward packet: iif "enp0" oif "veth21" ether saddr .. ip daddr 192.168.70.10 .. tcp dport 22 tcp flags == syn tcp window 29200
This is another dump of the packet that was received, but there are a couple of interesting changes. There is now an output interface set. This did not exist previously because the previous rules are located before the routing decision (the prerouting hook). The id is the same as before, so this is still the same packet, but the address and port have already been altered. In case there are rules that match "tcp dport 1222", they will no longer have any effect on this packet.
If the line contains no output interface (oif), the routing decision steered the packet to the local host. Route debugging is a different topic and not covered here.
trace id 9c01f8 inet filter forward rule ct status dnat jump allowed_dnats (verdict jump allowed_dnats)
This tells us that the packet matched a rule that jumps to a chain named "allowed_dnats". The next line shows the source of the connection failure:
trace id 9c01f8 inet filter allowed_dnats rule drop (verdict drop)
The rule unconditionally drops the packet, so no further log output for the packet exists. The next output line is the result of a different packet:
trace id 20a4ef inet trace_debug trace_pre packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp tcp dport 1222 tcp flags == syn
The trace id is different, but the packet has the same content. This is a retransmit attempt: the first packet was dropped, so TCP retries. Ignore the remaining output; it does not contain new information. Time to inspect that chain.
Ruleset investigation
The previous section found that the packet is dropped in a chain named “allowed_dnats” in the inet filter table. Time to look at it:
# nft list chain inet filter allowed_dnats
table inet filter {
chain allowed_dnats {
meta nfproto ipv4 ip daddr . tcp dport @allow_in accept
drop
}
}
The rule that accepts packets in the @allow_in set did not show up in the trace log. Double-check that the address is in the @allow_in set by listing the element:
# nft "get element inet filter allow_in { 192.168.70.10 . 22 }"
Error: Could not process rule: No such file or directory
As expected, the address-service pair is not in the set. We add it now.
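The element is added with the matching add command (same set name and element syntax as the query above), for example:
# nft "add element inet filter allow_in { 192.168.70.10 . 22 }"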
Run the query command again; it will return the newly added element.
# nft "get element inet filter allow_in { 192.168.70.10 . 22 }"
table inet filter {
        set allow_in {
                type ipv4_addr . inet_service
                elements = { 192.168.70.10 . 22 }
        }
}
The ssh command should now work and the trace output reflects the change:
trace id 497abf58 inet filter forward rule ct status dnat jump allowed_dnats (verdict jump allowed_dnats)
trace id 497abf58 inet filter allowed_dnats rule meta nfproto ipv4 ip daddr . tcp dport @allow_in accept (verdict accept)
trace id 497abf58 ip postrouting packet: iif "enp0" oif "veth21" ether ..
trace id 497abf58 ip postrouting policy accept
This shows the packet passes the last hook in the forwarding path – postrouting.
If the connection is still not working, the problem is somewhere later in the packet pipeline and outside of the nftables ruleset.
Summary
This article gave an introduction to checking for packet drops and other sources of connectivity problems with the nftables trace mechanism. A later post in the series shows how to inspect the connection tracking subsystem and the NAT information that may be attached to tracked flows.
[This message comes directly from the desk of Matthew Miller, the Fedora Project Leader. — Ed.]
When I wrote about COVID-19 and the Fedora community all the way back on March 16, it was very unclear how 2020 was going to turn out. I hoped that we'd have everything under control and return to normal soon—we didn't take our Flock to Fedora in-person conference off the table for another month. Back then, I naively hoped that this would be a short event and that life would return to normal soon. But of course, things got worse, and we had to reimagine Flock as a virtual event on short notice. We weren't even sure if we'd be able to make our regular Fedora Linux releases on schedule.
Even without the pandemic, 2020 was already destined to be an interesting year. Because Red Hat moved the datacenter where most of Fedora’s servers live, our infrastructure team had to move our servers across the continent. Fedora 33 had the largest planned change set of any Fedora Linux release—and not small things either. We changed the default filesystem for desktop variants to BTRFS and promoted Fedora IoT to an Edition. We also began Fedora ELN—a new process which does a nightly build of Fedora’s development branch in the same configuration Red Hat would use to compose Red Hat Enterprise Linux. And Fedora’s popularity keeps growing, which means more users to support and more new community members to onboard. It’s great to be successful, but we also need to keep up with ourselves!
So, it was already busy. And then the pandemic came along. In many ways, we’re fortunate: we’re already a global community used to distributed work, and we already use chat-based meetings and video calls to collaborate. But it made the datacenter move more difficult. The closure of Red Hat offices meant that some of the QA hardware was inaccessible. We couldn’t gather together in person like we’re used to doing. And of course, we all worried about the safety of our friends and family. Isolation and disruption just plain make everything harder.
I’m always proud of the Fedora community, but this year, even more so. In a time of great stress and uncertainty, we came together and did our best work. Flock to Fedora became Nest With Fedora. Thanks to the heroic effort of Marie Nordin and many others, it was a resounding success. We had way more attendees than we’ve ever had at an in-person Flock, which made our community more accessible to contributors who can’t always join us. And we followed up with our first-ever virtual release party and an online Fedora Women’s Day, both also resounding successes.
And then, we shipped both Fedora 32 and Fedora 33 on time, extending our streak to six releases—three straight years of hitting our targets.
Like everyone, I'm looking ahead to 2021. The next few months are still going to be hard, but the amazing work on mRNA and other new vaccine technology means we have clear reasons to be optimistic. Through this trying year, the Fedora community is stronger than ever, and we have some great things to carry forward into better times: a Nest-like virtual event to complement Flock, online release parties, our weekly Fedora Social Hour, and of course the CPE team's great trivia events.
In 2021, we’ll keep doing the great work to push the state of the art forward. We’ll be bold in bringing new features into Fedora Linux. We’ll try new things even when we’re worried that they might not work, and we’ll learn from failures and try again. And we’ll keep working to make our community and our platform inclusive, welcoming, and accessible to all.
To everyone who has contributed to Fedora in any way, thank you. Packagers, blog writers, doc writers, testers, designers, artists, developers, meeting chairs, sysadmins, Ask Fedora answerers, D&I team, and more—you kicked ass this year and it shows. Stay safe and healthy, and we'll meet again in person soon. Oh, one more thing! Join us for a Fedora Social Hour New Year's Eve Special. We'll meet at 23:30 UTC today in Hopin (the platform we used for Nest and other events). Hope to see you there!
Fedora 33 introduced a new default filesystem in desktop variants, Btrfs. After years of Fedora using ext4 on top of Logical Volume Manager (LVM) volumes, this is a big shift. Changing the default file system requires compelling reasons. While Btrfs is an exciting next-generation file system, ext4 on LVM is well established and stable. This guide aims to explore the high-level features of each and make it easier to choose between Btrfs and LVM-ext4.
In summary
The simplest advice is to stick with the defaults. A fresh Fedora 33 install defaults to Btrfs and upgrading a previous Fedora release continues to use whatever was initially installed, typically LVM-ext4. For an existing Fedora user, the cleanest way to get Btrfs is with a fresh install. However, a fresh install is much more disruptive than a simple upgrade. Unless there is a specific need, this disruption could be unnecessary. The Fedora development team carefully considered both defaults, so be confident with either choice.
What about all the other file systems?
There are a large number of file systems for Linux systems. The number explodes after adding in combinations of volume managers, encryption methods, and storage mechanisms. So why focus on Btrfs and LVM-ext4? For the Fedora audience, these two setups are likely to be the most common. Ext4 on top of LVM became the default disk layout in Fedora 11, and ext3 on top of LVM came before that.
Now that Btrfs is the default for Fedora 33, the vast majority of existing users will be looking at whether they should stay where they are or make the jump forward. Faced with a fresh Fedora 33 install, experienced Linux users may wonder whether to use this new file system or fall back to what they are familiar with. So out of the wide field of possible storage options, many Fedora users will wonder how to choose between Btrfs and LVM-ext4.
Commonalities
Despite core differences between the two setups, Btrfs and LVM-ext4 actually have a lot in common. Both are mature and well-tested storage technologies. LVM has been in continuous use since the early days of Fedora Core and ext4 became the default in 2009 with Fedora 11. Btrfs merged into the mainline Linux kernel in 2009 and Facebook uses it widely. SUSE Linux Enterprise 12 made it the default in 2014. So there is plenty of production run time there as well.
Both systems do a great job preventing file system corruption due to unexpected power outages, even though the way they accomplish it is different. Supported configurations include single drive setups as well as spanning multiple devices, and both are capable of creating nearly instant snapshots. A variety of tools exist to help manage either system, both with the command line and graphical interfaces. Either solution works equally well on home desktops and on high-end servers.
The ext4 file system focuses on high performance and scalability, without a lot of extra frills. It is effective at preventing fragmentation over extended periods of time and provides nice tools for when it does happen. Ext4 is rock solid because it is built on the previous ext3 file system, bringing with it all the years of in-system testing and bug fixes.
Most of the advanced capabilities in the LVM-ext4 setup come from LVM itself. LVM sits “below” the file system, which means it supports any file system. Logical volumes (LV) are generic block devices so virtual machines can use them directly. This flexibility allows each logical volume to use the right file system, with the right options, for a variety of situations. This layered approach also honors the Unix philosophy of small tools working together.
The volume group (VG) abstraction from the hardware allows LVM to create flexible logical volumes. Each LV pulls from the same storage pool but has its own configuration. Resizing volumes is a lot easier than resizing physical partitions as there are no limitation of ordered placement of the data. LVM physical volumes (PV) can be any number of partitions and can even move between devices while the system is running.
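For example, growing a logical volume together with the file system on top of it is a single command; the volume group and volume names here are placeholders:
$ sudo lvextend --resizefs --size +10G vg_fedora/lv_home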
LVM supports read-only and read-write snapshots, which make it easy to create consistent backups from active systems. Each snapshot has a defined size, and a change to the source or snapshot volume uses space from there. Alternatively, logical volumes can also be part of a thinly provisioned pool. This allows snapshots to automatically use data from a pool instead of consuming fixed-size chunks defined at volume creation.
Multiple devices with LVM
LVM really shines when there are multiple devices. It has native support for most RAID levels and each logical volume can have a different RAID level. LVM will automatically choose appropriate physical devices for the RAID configuration or the user can specify it directly. Basic RAID support includes data striping for performance (RAID0) and mirroring for redundancy (RAID1). Logical volumes can also use advanced setups like RAID5, RAID6, and RAID10. LVM RAID support is mature because under the hood LVM uses the same device-mapper (dm) and multiple-device (md) kernel support used by mdadm.
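As a sketch, creating a mirrored logical volume takes one command; the names and size below are placeholders:
$ sudo lvcreate --type raid1 --mirrors 1 --size 20G --name lv_data vg_fedora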
Logical volumes can also be cached volumes for systems with both fast and slow drives. A classic example is a combination of SSD and spinning-disk drives. Cached volumes use faster drives for more frequently accessed data (or as a write cache), and the slower drive for bulk data.
The large number of stable features in LVM and the reliable performance of ext4 are a testament to how long they have been in use. Of course, with more features comes complexity. It can be challenging to find the right options for the right feature when configuring LVM. For single drive desktop systems, features of LVM like RAID and cache volumes don’t apply. However, logical volumes are more flexible than physical partitions and snapshots are useful. For normal desktop use, the complexity of LVM can also be a barrier to recovering from issues a typical user might encounter.
Lessons learned from previous generations guided the features built into Btrfs. Unlike ext4, it can directly span multiple devices, so it brings along features typically found only in volume managers. It also has features that are unique in the Linux file system space (ZFS has a similar feature set, but don’t expect it in the Linux kernel).
Key Btrfs features
Perhaps the most important feature is the checksumming of all data. Checksumming, along with copy-on-write, provides the key method of ensuring file system integrity after unexpected power loss. More uniquely, checksumming can detect errors in the data itself. Silent data corruption, sometimes referred to as bitrot, is more common than most people realize. Without active validation, corruption can end up propagating to all available backups. This leaves the user with no valid copies. By transparently checksumming all data, Btrfs is able to immediately detect any such corruption. Enabling the right dup or raid option allows the file system to transparently fix the corruption as well.
Copy-on-write (COW) is also a fundamental feature of Btrfs, as it is critical in providing file system integrity and instant subvolume snapshots. Snapshots automatically share underlying data when created from common subvolumes. Additionally, after-the-fact deduplication uses the same technology to eliminate identical data blocks. Individual files can use COW features by calling cp with the reflink option. Reflink copies are especially useful for copying large files, such as virtual machine images, that tend to have mostly identical data over time.
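For instance, a reflink copy of a virtual machine image is nearly instant and initially consumes no additional space; the file names here are placeholders:
$ cp --reflink=always fedora-vm.qcow2 fedora-vm-clone.qcow2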
Btrfs supports spanning multiple devices with no volume manager required. Multiple device support unlocks data mirroring for redundancy and striping for performance. There is also experimental support for more advanced RAID levels, such as RAID5 and RAID6. Unlike standard RAID setups, the Btrfs raid1 option actually allows an odd number of devices. For example, it can use 3 devices, even if they are different sizes.
All RAID and dup options are specified at the file system level. As a consequence, individual subvolumes cannot use different options. Note that using the RAID1 option with multiple devices means that all data in the volume is available even if one device fails and the checksum feature maintains the integrity of the data itself. That is beyond what current typical RAID setups can provide.
Additional features
Btrfs also enables quick and easy remote backups. Subvolume snapshots can be sent to a remote system for storage. By leveraging the inherent COW meta-data in the file system, these transfers are efficient by only sending incremental changes from previously sent snapshots. User applications such as snapper make it easy to manage these snapshots.
Additionally, a Btrfs volume can have transparent compression and chattr +c will mark individual files or directories for compression. Not only does compression reduce the space consumed by data, but it helps extend the life of SSDs by reducing the volume of write operations. Compression certainly introduces additional CPU overhead, but a lot of options are available to dial in the right trade-offs.
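As a rough sketch, compression can be enabled either for a whole mount or per file; the device, mount point and path below are placeholders:
$ sudo mount -o compress=zstd:1 /dev/sda3 /mnt/data
$ chattr +c /mnt/data/logs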
The integration of file system and volume manager functions by Btrfs means that overall maintenance is simpler than LVM-ext4. Certainly this integration comes with less flexibility, but for most desktop, and even server, setups it is more than sufficient.
Btrfs on LVM
Btrfs can convert an ext3/ext4 file system in place. In-place conversion means no data to copy out and then back in. The data blocks themselves are not even modified. As a result, one option for an existing LVM-ext4 system is to leave LVM in place and simply convert ext4 over to Btrfs. While doable and supported, there are reasons why this isn't the best option.
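For reference, the conversion itself is done offline with the btrfs-convert tool from btrfs-progs; a minimal sketch against a placeholder logical volume looks like this:
$ sudo umount /home
$ sudo btrfs-convert /dev/mapper/vg_fedora-lv_home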
Some of the appeal of Btrfs is the easier management that comes with a file system integrated with a volume manager. By running on top of LVM, there is still some other volume manager in play for any system maintenance. Also, LVM setups typically have multiple fixed sized logical volumes with independent file systems. While Btrfs supports multiple volumes in a given computer, many of the nice features expect a single volume with multiple subvolumes. The user is still stuck manually managing fixed sized LVM volumes if each one has an independent Btrfs volume. Though, the ability to shrink mounted Btrfs filesystems does make working with fixed sized volumes less painful. With online shrink there is no need to boot a live image.
The physical locations of logical volumes must be carefully considered when using the multiple device support of Btrfs. To Btrfs, each LV is a separate physical device and if that is not actually the case, then certain data availability features might make the wrong decision. For example, using raid1 for data typically provides protection if a single drive fails. If the actual logical volumes are on the same physical device, then there is no redundancy.
If there is a strong need for some particular LVM feature, such as raw block devices or cached logical volumes, then running Btrfs on top of LVM makes sense. In this configuration, Btrfs still provides most of its advantages such as checksumming and easy sending of incremental snapshots. While LVM has some operational overhead when used, it is no more so with Btrfs than with any other file system.
Wrap up
When trying to choose between Btrfs and LVM-ext4 there is no single right answer. Each user has unique requirements, and the same user may have different systems with different needs. Take a look at the feature set of each configuration, and decide if there is something compelling about one over the other. If not, there is nothing wrong with sticking with the defaults. There are excellent reasons to choose either setup.
The kernel team is working on final integration for kernel 5.10. This version was just recently released, and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week from Monday, January 04, 2021 through Monday, January 11, 2021. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.
How does a test week work?
A test week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.
To contribute, you only need to be able to do the following things:
Download test materials, which include some large files
Read and follow directions step by step
The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results. We have a document which provides all the steps written.
Happy testing, and we hope to see you on test day.
Fedora CoreOS is a lightweight, secure operating system optimized for running containerized workloads. A YAML document is all you need to describe the workload you’d like to run on a Fedora CoreOS server.
This is wonderful for a single server, but how would you describe a fleet of cooperating Fedora CoreOS servers? For example, what if you wanted a set of servers running load balancers, others running a database cluster and others running a web application? How can you get them all configured and provisioned? How can you configure them to communicate with each other? This article looks at how Terraform solves this problem.
Getting started
Before you start, decide whether you need to review the basics of Fedora CoreOS. Check out this previous article on the Fedora Magazine:
Terraform is an open source tool for defining and provisioning infrastructure. Terraform defines infrastructure as code in files. It provisions infrastructure by calculating the difference between the desired state in code and observed state and applying changes to remove the difference.
HashiCorp, the company that created and maintains Terraform, offers an RPM repository to install Terraform.
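A minimal sketch of that installation on Fedora follows; the repository URL mirrors HashiCorp's published instructions, so double-check it on their site:
$ sudo dnf install -y dnf-plugins-core
$ sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/fedora/hashicorp.repo
$ sudo dnf install -y terraform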
To get familiar with the tools, start with a simple example: you're going to create a single Fedora CoreOS server in AWS. To follow along, you need to install awscli and have an AWS account. awscli can be installed from the Fedora repositories and configured using the aws configure command:
sudo dnf install -y awscli
aws configure
Please note, AWS is a paid service. If executed correctly, participants should expect less than $1 USD in charges, but mistakes may lead to unexpected charges.
Configuring Terraform
In a new directory, create a file named config.yaml. This file will hold the contents of your Fedora CoreOS configuration. The configuration simply adds an SSH key for the core user. Modify the authorized_ssh_key section to use your own.
Next, create a file main.tf to contain your Terraform specification. Take a look at the contents section by section. It begins with a block to specify the versions of your providers.
Terraform uses providers to control infrastructure. Here it uses the AWS provider to provision EC2 servers, but it can provision any kind of AWS infrastructure. The ct provider from Poseidon Labs stands for config transpiler. This provider will transpile Fedora CoreOS configurations into Ignition configurations. As a result, you do not need to use fcct to transpile your configurations. Now that your provider versions are specified, initialize them.
provider "aws" { region = "us-west-2"
} provider "ct" {}
The AWS region is set to us-west-2 and the ct provider requires no configuration. With the providers configured, you’re ready to define some infrastructure. Use a data source block to read the configuration.
With this data block defined, you can now access the transpiled Ignition output as data.ct_config.config.rendered. To create an EC2 server, use a resource block, and pass the Ignition output as the user_data attribute.
This configuration hard-codes the virtual machine image (AMI) to the latest stable image of Fedora CoreOS in the us-west-2 region at time of writing. If you would like to use a different region or stream, you can discover the correct AMI on the Fedora CoreOS downloads page.
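The same information is also published in the Fedora CoreOS stream metadata, so you can look it up from the command line; this is only a sketch, and the jq path is an assumption based on the stream metadata layout that may need adjusting:
$ curl -s https://builds.coreos.fedoraproject.org/streams/stable.json | jq -r '.architectures.x86_64.images.aws.regions["us-west-2"].image'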
Finally, you’d like to know the public IP address of the server once it’s created. Use an output block to define the outputs to be displayed once Terraform completes its provisioning.
output "instance_ip_addr" { value = aws_instance.server.public_ip
}
Alright! You’re ready to create some infrastructure. To deploy the server simply run:
terraform init # Installs the provider dependencies
terraform apply # Displays the proposed changes and applies them
Once completed, Terraform prints the public IP address of the server, and you can SSH to the server by running ssh core@{public ip here}. Congratulations — you've provisioned your first Fedora CoreOS server using Terraform!
Updates and immutability
At this point you can modify the configuration in config.yaml however you like. To deploy your change simply run terraform apply again. Notice that each time you change the configuration, when you run terraform apply it destroys the server and creates a new one. This aligns well with the Fedora CoreOS philosophy: Configuration can only happen once. Want to change that configuration? Create a new server. This can feel pretty alien if you’re accustomed to provisioning your servers once and continuously re-configuring them with tools like Ansible, Puppet or Chef.
The benefit of always creating new servers is that it is significantly easier to test that newly provisioned servers will act as expected. It can be much more difficult to account for all of the possible ways in which updating a system in place may break. Tooling that adheres to this philosophy typically falls under the heading of Immutable Infrastructure. This approach to infrastructure has some of the same benefits seen in functional programming techniques, namely that mutable state is often a source of error.
Using variables
You can use Terraform input variables to parameterize your infrastructure. In the previous example, you might like to parameterize the AWS region or instance type. This would let you deploy several instances of the same configuration with differing parameters. What if you want to parameterize the Fedora CoreOS configuration? Do so using the templatefile function.
As an example, try parameterizing the username of your user. To do this, add a username variable to the main.tf file:
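A sketch of the change is shown below; it assumes config.yaml is edited to use a ${username} template variable in place of the hard-coded core user name.

variable "username" {
  type    = string
  default = "core"
}

data "ct_config" "config" {
  content = templatefile("config.yaml", {
    username = var.username
  })
  strict = true
}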
To deploy with username set to jane, run terraform apply -var="username=jane". To verify, try to SSH into the server with ssh jane@{public ip address}.
Leveraging the dependency graph
Passing variables from Terraform into Fedora CoreOS configuration is quite useful. But you can go one step further and pass infrastructure data into the server configuration. This is where Terraform and Fedora CoreOS start to really shine.
Terraform creates a dependency graph to model the state of infrastructure and to plan updates. If the output of one resource (e.g., the public IP address of a server) is passed as the input of another resource (e.g., the destination in a firewall rule), Terraform understands that changes in the former require recreating or modifying the latter. If you pass infrastructure data into a Fedora CoreOS configuration, it will participate in the dependency graph. Updates to the inputs will trigger creation of a new server with the new configuration.
Consider a system of one load balancer and three web servers as an example.
The goal is to configure the load balancer with the IP address of each web server so that it can forward traffic to them.
Web server configuration
First, create a file web.yaml and add a simple Nginx configuration with a templated message.
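A sketch of web.yaml is shown below. It writes a templated index page to disk and serves it with an Nginx container; the unit, paths, and image are illustrative assumptions rather than the exact original configuration.

variant: fcos
version: 1.2.0
storage:
  files:
    - path: /var/www/index.html
      contents:
        inline: |
          <html> <h1>${message}</h1> </html>
systemd:
  units:
    - name: nginx.service
      enabled: true
      contents: |
        [Unit]
        Description=Serve a templated message with Nginx
        After=network-online.target
        Wants=network-online.target
        [Service]
        ExecStart=/usr/bin/podman run --rm -p 80:80 -v /var/www:/usr/share/nginx/html:ro,z docker.io/library/nginx
        [Install]
        WantedBy=multi-user.target

In main.tf, render three copies of this configuration and create three web servers (again with a placeholder AMI; security group rules for port 80 are omitted for brevity):

data "ct_config" "web" {
  count   = 3
  content = templatefile("web.yaml", {
    message = "Hello from Server ${count.index}"
  })
  strict = true
}

resource "aws_instance" "web" {
  count         = 3
  ami           = "ami-XXXXXXXXXXXXXXXXX" # placeholder: Fedora CoreOS stable AMI for us-west-2
  instance_type = "t3.micro"
  user_data     = data.ct_config.web[count.index].rendered
}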
Notice the use of count = 3 and the count.index variable. You can use count to make many copies of a resource. Here, it creates three configurations and three web servers. The count.index variable is used to pass the first configuration to the first web server and so on.
Load balancer configuration
The load balancer will be a basic HAProxy load balancer that forwards to each server. Place the configuration in a file named lb.yaml:
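A sketch of lb.yaml is shown below. It renders an HAProxy configuration from a backends map supplied by Terraform and runs HAProxy in a container; the unit, image, and timeouts are illustrative assumptions.

variant: fcos
version: 1.2.0
storage:
  files:
    - path: /etc/haproxy/haproxy.cfg
      contents:
        inline: |
          defaults
            mode http
            timeout connect 5s
            timeout client 30s
            timeout server 30s
          frontend http
            bind *:80
            default_backend web
          backend web
          %{ for name, ip in backends ~}
            server ${name} ${ip}:80 check
          %{ endfor ~}
systemd:
  units:
    - name: haproxy.service
      enabled: true
      contents: |
        [Unit]
        Description=Run HAProxy in a container
        After=network-online.target
        Wants=network-online.target
        [Service]
        ExecStart=/usr/bin/podman run --rm --net=host -v /etc/haproxy:/usr/local/etc/haproxy:ro,z docker.io/library/haproxy
        [Install]
        WantedBy=multi-user.target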
The template expects a map with server names as keys and IP addresses as values. You can create that using the zipmap function. Use the ID of the web servers as keys and the public IP addresses as values.
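A sketch of the corresponding Terraform blocks, once more with a placeholder AMI:

data "ct_config" "lb" {
  content = templatefile("lb.yaml", {
    backends = zipmap(aws_instance.web[*].id, aws_instance.web[*].public_ip)
  })
  strict = true
}

resource "aws_instance" "lb" {
  ami           = "ami-XXXXXXXXXXXXXXXXX" # placeholder: Fedora CoreOS stable AMI for us-west-2
  instance_type = "t3.micro"
  user_data     = data.ct_config.lb.rendered
}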
Finally, add an output block to display the IP address of the load balancer.
output "load_balancer_ip" { value = aws_instance.lb.public_ip
}
All right! Run terraform apply and the IP address of the load balancer displays on completion. You should be able to make requests to the load balancer and get responses from each web server.
$ export LB={{load balancer IP here}}
$ curl $LB
<html> <h1>Hello from Server 0</h1> </html>
$ curl $LB
<html> <h1>Hello from Server 1</h1> </html>
$ curl $LB
<html> <h1>Hello from Server 2</h1> </html>
Now you can modify the configuration of the web servers or load balancer. Any changes can be realized by running terraform apply once again. Note in particular that any change to the web server IP addresses will cause Terraform to recreate the load balancer (changing the count from 3 to 4 is a simple test). Hopefully this emphasizes that the load balancer configuration is indeed a part of the Terraform dependency graph.
Clean up
You can destroy all the infrastructure using the terraform destroy command. Simply navigate to the folder where you created main.tf and run terraform destroy.
Where next?
Code for this tutorial can be found at this GitHub repository. Feel free to play with the examples and contribute more if you find something you'd love to share with the world. To learn more about all the amazing things Fedora CoreOS can do, dive into the docs or come chat with the community. To learn more about Terraform, you can rummage through the docs, check out #terraform on Freenode, or contribute on GitHub.
COPR is a collection of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open-source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.
This article presents a few new and interesting projects in COPR. If you’re new to using COPR, see the COPR User Documentation for how to get started.
Blanket
Blanket is an application for playing background sounds, which may potentially improve your focus and increase your productivity. Alternatively, it may help you relax and fall asleep in a noisy environment. No matter what time it is or where you are, Blanket allows you to wake up while birds are chirping, work surrounded by friendly coffee shop chatter or distant city traffic, and then sleep like a log next to a fireplace while it is raining outside. Other popular choices for background sounds such as pink and white noise are also available.
Installation instructions
The repo currently provides Blanket for Fedora 32 and 33. To install it, use these commands:
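The commands below follow the usual Copr pattern; the repository owner is a placeholder, so use the exact owner/project name shown on the Blanket Copr page.

sudo dnf copr enable <copr-owner>/blanket
sudo dnf install blanket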
k9s
k9s is a command-line tool for managing Kubernetes clusters. It allows you to list and interact with running pods, read their logs, dig through used resources, and generally make life with Kubernetes easier. With its extensibility through plugins and customizable UI, k9s is welcoming to power users.
Installation instructions
The repo currently provides k9s for Fedora 32, 33, and Fedora Rawhide, as well as EPEL 7 and 8, CentOS Stream, and others. To install it, use these commands:
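As before, the Copr owner below is a placeholder; take the exact name from the k9s Copr page.

sudo dnf copr enable <copr-owner>/k9s
sudo dnf install k9s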
rhbzquery
rhbzquery is a simple tool for querying the Fedora Bugzilla instance. It provides an interface for specifying the search query, but it doesn't list the results on the command line. Instead, rhbzquery generates a Bugzilla URL and opens it in a web browser.
Installation instructions
The repo currently provides rhbzquery for Fedora 32, 33, and Fedora Rawhide. To install it, use these commands:
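Again, substitute the real owner/project name from the rhbzquery Copr page for the placeholder.

sudo dnf copr enable <copr-owner>/rhbzquery
sudo dnf install rhbzquery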
gping
gping is a more visually intriguing alternative to the standard ping command, as it shows results in a graph. It is also possible to ping multiple hosts at the same time to easily compare their response times.
Installation instructions
The repo currently provides gping for Fedora 32, 33, and Fedora Rawhide as well as for EPEL 7 and 8. To install it, use these commands:
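Once more, the owner below is a placeholder for the actual gping Copr repository.

sudo dnf copr enable <copr-owner>/gping
sudo dnf install gping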
This article shows how easy it is to get started using pods with Podman on Fedora. So what is Podman? Podman is a container engine developed by Red Hat, and yes, if you thought of Docker when you read "container engine", you are on the right track. Docker kicked off a whole new wave of containerization, and Kubernetes added the concept of pods for orchestrating containers that share common resources. But Docker is not the only effective way to work with containers: Podman can also manage pods on Fedora, as well as the containers that run inside those pods.
Podman is a daemonless, open source, Linux native tool designed to make it easy to find, run, build, share and deploy applications using Open Containers Initiative (OCI) Containers and Container Images.
From the official Podman documentation at http://docs.podman.io/en/latest/
Why should we switch to Podman?
Podman is a daemonless container engine for developing, managing, and running OCI Containers on your Linux System. Containers can either be run as root or in rootless mode. Podman directly interacts with an image registry, containers and image storage.
Install Podman:
sudo dnf -y install podman
Creating a Pod:
To start using pods, we first need to create one. The basic command structure is:
$ podman pod create
The command above contains no arguments, so it will create a pod with a randomly generated name. You might, however, want to give your pod a meaningful name. For that, just modify the command slightly:
$ podman pod create --name climoiselle
Podman creates the pod and reports back its ID. In the example shown, the pod was given the name 'climoiselle'. Viewing the newly created pod is easy using the command shown below:
$ podman pod list
Newly created pods have been deployed
As you can see, there are two pods listed here: one named darshna and the one created in the example, named climoiselle. You will no doubt notice that both pods already include one container, yet we didn't deploy any containers to them.
What is that extra container inside the pod? This randomly generated container is an infra container. Every Podman pod includes an infra container, and in practice these containers do nothing but go to sleep. Their purpose is to hold the namespaces associated with the pod and to allow Podman to connect other containers to the pod. The infra container also allows the pod to keep running when all associated containers have been stopped.
You can also view the individual containers within a pod with the command:
$ podman ps -a --pod
Add a container
The cool thing is, you can add more containers to your newly deployed pod. Always remember the name of your pod; you'll need it in order to deploy a container into that pod. We'll use the official Ubuntu image to deploy a container that runs the top command.
$ podman run -dt --pod climoiselle ubuntu top
Everything in a Single Command:
Podman is agile when it comes to deploying a container into a pod you are creating. You can create a pod and deploy a container to it with a single command. Let's say you want to deploy an NGINX container, mapping external port 8080 to internal port 80, in a new pod named test_server.
$ podman run -dt --pod new:test_server -p 8080:80 nginx
Created a new pod and deployed a container together
Let’s check all pods that have been created and the number of containers running in each of them …
$ podman pod list
List of the pods, their state, and the number of containers running in them
Do you want to see the detailed configuration of a running pod? Just type the command shown below:
$ podman pod inspect [pod name or ID]
Make it stop!
To stop a pod, we need its name or ID. The podman pod list command shows the pods and their infra container IDs. Simply use podman pod stop and give it the name or ID of the pod.
$ podman pod stop climoiselle
Hey take a look!
My pod climoiselle stopped
After following this short tutorial, you can see how quickly you can use pods with Podman on Fedora. It's an easy and convenient way to run containers that share resources and interact with each other.