
Upgrading Fedora 29 to Fedora 30

Fedora 30 is available now. You’ll likely want to upgrade your system to the latest version of Fedora. Fedora Workstation has a graphical upgrade method. Alternatively, Fedora offers a command-line method for upgrading Fedora 29 to Fedora 30.

Upgrading Fedora 29 Workstation to Fedora 30

Soon after release time, a notification appears to tell you an upgrade is available. You can click the notification to launch the GNOME Software app. Or you can choose Software from GNOME Shell.

Choose the Updates tab in GNOME Software and you should see a screen informing you that Fedora 30 is Now Available.

If you don’t see anything on this screen, try using the reload button at the top left. It may take some time after release for all systems to be able to see an upgrade available.

Choose Download to fetch the upgrade packages. You can continue working until you reach a stopping point and the download is complete. Then use GNOME Software to restart your system and apply the upgrade. Upgrading takes time, so you may want to grab a coffee and come back to the system later.

Using the command line

If you’ve upgraded from past Fedora releases, you are likely familiar with the dnf upgrade plugin. This method is the recommended and supported way to upgrade from Fedora 29 to Fedora 30. Using this plugin will make your upgrade to Fedora 30 simple and easy.

1. Update software and back up your system

Before you do anything, make sure you have the latest software for Fedora 29 before beginning the upgrade process. To update your software, use GNOME Software or enter the following command in a terminal.

sudo dnf upgrade --refresh

Additionally, make sure you back up your system before proceeding. For help with taking a backup, see the backup series on the Fedora Magazine.

2. Install the DNF plugin

Next, open a terminal and type the following command to install the plugin:

sudo dnf install dnf-plugin-system-upgrade

3. Start the update with DNF

Now that your system is up-to-date, backed up, and you have the DNF plugin installed, you can begin the upgrade by using the following command in a terminal:

sudo dnf system-upgrade download --releasever=30

This command will begin downloading all of the upgrades for your machine locally to prepare for the upgrade. If you have issues when upgrading because of packages without updates, broken dependencies, or retired packages, add the --allowerasing flag to the above command. This will allow DNF to remove packages that may be blocking your system upgrade.
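For example, the full command with the flag added would be:

sudo dnf system-upgrade download --releasever=30 --allowerasing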

4. Reboot and upgrade

Once the previous command finishes downloading all of the upgrades, your system will be ready for rebooting. To boot your system into the upgrade process, type the following command in a terminal:

sudo dnf system-upgrade reboot

Your system will restart after this. Many releases ago, the fedup tool would create a new option on the kernel selection / boot screen. With the dnf-plugin-system-upgrade package, your system reboots into the current kernel installed for Fedora 29; this is normal. Shortly after the kernel selection screen, your system begins the upgrade process.

Now might be a good time for a coffee break! Once it finishes, your system will restart and you’ll be able to log in to your newly upgraded Fedora 30 system.

Upgrading Fedora: Upgrade complete!

Resolving upgrade problems

On occasion, there may be unexpected issues when you upgrade your system. If you experience any issues, please visit the DNF system upgrade wiki page for more information on troubleshooting in the event of a problem.

If you are having issues upgrading and have third-party repositories installed on your system, you may need to disable these repositories while you are upgrading. For support with repositories not provided by Fedora, please contact the providers of the repositories.
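For example, one way to temporarily disable a third-party repository is with the dnf config-manager plugin. This is a sketch, assuming the dnf-plugins-core package is installed; replace example-repo with the repository ID shown by dnf repolist:

sudo dnf config-manager --set-disabled example-repo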


What’s new in Fedora 30 Workstation

Fedora 30 Workstation is the latest groundbreaking release of our free, leading-edge operating system. You can download it from the official website here right now. There are several new and noteworthy changes in Fedora Workstation. Read more details below.

GNOME 3.32

Fedora 30 Workstation includes the latest release of this simple, beautiful desktop environment for users of all types. There are numerous improvements throughout GNOME 3.32, including:

  • A refreshed visual style with buttons and switches that are easier to identify and use
  • Completely refreshed icons for applications
  • Consistent user icons across the desktop
  • Snappier performance thanks to fixes and enhancements in the core GNOME libraries
  • An Applications panel that controls permissions, to make use of Flatpak apps easier
  • …and much more!

Do you want the full details of everything in GNOME 3.32? Visit the release notes for even more community provided goodness.

Silverblue

You can also try Fedora Silverblue — it’s all the features of Workstation but combined with the rpm-ostree features of Fedora Atomic. Worry-free upgrades (with backouts) are just one of the benefits of this technology. You can also install your favorite Flatpak or RPM packaged apps on top.

Silverblue continues to develop now and in future releases. Learn how you can contribute by visiting the Silverblue team’s website.


Announcing the release of Fedora 30

It seems like it was just six months ago that we announced Fedora 29, and here we are again. Today, we announce our next operating system release. Even though it went so quickly, a lot has happened in the last half year, and you’ll see the results in Fedora 30.

If you’re impatient, go to https://getfedora.org/ now. For details, read on.

Variants and more

Fedora Editions are targeted outputs geared toward specific “showcase” uses. Since we first started using this concept in the Fedora 21 release, the needs of the community have continued to evolve. As part of Fedora 30, we’re combining cloud and server into the Fedora Server edition. We’re bringing in Fedora CoreOS to replace Fedora Atomic Host as our container-focused deliverable in the Fedora 30 timeframe — stay tuned for that. The Fedora Workstation edition continues to focus on delivering the latest in open source desktop tools.

Of course, we produce more than just the editions. Fedora Spins and Labs target a variety of audiences and use cases, including the Internet of Things. And, we haven’t forgotten our alternate architectures, ARM AArch64, Power, and S390x.

Fedora Workstation features GNOME 3.32 — the latest release of this popular desktop environment. GNOME 3.32 features an updated visual style, including the user interface, the icons, and the desktop itself. New to Fedora Server are Linux System Roles — a collection of roles and modules executed by Ansible to assist Linux admins in the configuration of common GNU/Linux subsystems.

No matter what variant of Fedora you use, you’re getting the latest the open source world has to offer. GCC 9, Bash 5.0, and PHP 7.3 are among the many updated packages in Fedora 30. We’re excited for you to try it out. So go to https://getfedora.org/ and download it now. Or if you’re already running a Fedora release, follow the easy upgrade instructions.

Along with the release of Fedora 30, we’re moving our “Ask Fedora” support forum to the Discourse platform. Log in to Ask Fedora to try it out and watch for a Fedora Magazine article about it soon.

As always, thanks to the thousands of people who contributed in some way to the Fedora Project in this release cycle, and to the Fedora heroes who helped get this release out on schedule even with so much else going on. If you’re in Boston for Red Hat Summit next week, whether you are one of these contributors, would like to be one in the future, or just a friend, make sure to visit the Fedora booth in Community Central!


Awk utility in Fedora

Fedora provides awk as part of its default installation in all its editions, including the immutable ones like Silverblue. But you may be asking, what is awk and why would you need it?

Awk is a data-driven programming language that acts when it matches a pattern. On Fedora, and most other distributions, GNU awk or gawk is used. Read on for more about this language and how to use it.

A brief history of awk

Awk began at Bell Labs in 1977. Its name is an acronym from the initials of the designers: Alfred V. Aho, Peter J. Weinberger, and Brian W. Kernighan.

The specification for awk in the POSIX Command Language and Utilities standard further clarified the language. Both the gawk designers and the original awk designers at Bell Laboratories provided feedback for the POSIX specification.

From The GNU Awk User’s Guide

For a more in-depth look at how awk/gawk ended up being as powerful and useful as it is, follow the link above. Numerous individuals have contributed to the current state of gawk. Among those are:

  • Arnold Robbins and David Trueman, the creators of gawk
  • Michael Brennan, the creator of mawk, which later was merged with gawk
  • Jurgen Kahrs, who added networking capabilities to gawk in 1997
  • John Hague, who rewrote the gawk internals and added an awk-level debugger in 2011

Using awk

The following sections show various ways of using awk in Fedora.

At the command line

The simplest way to invoke awk is at the command line. You can search a text file for a particular pattern, and if found, print out the line(s) of the file that match the pattern anywhere. As an example, use cat to take a look at the command history file in your home directory:

$ cat ~/.bash_history

There are probably many lines scrolling by right now.

Awk helps with this type of file quite easily. Instead of printing the entire file out to the terminal like cat, you can use awk to find something of specific interest. For this example, type the following at the command line if you’re running a standard Fedora edition:

$ awk '/dnf/' ~/.bash_history

If you’re running Silverblue, try this instead:

$ awk '/rpm-ostree/' ~/.bash_history

In both cases, more data likely appears than what you really want. That's no problem for awk, since it accepts regular expressions. Using the previous example, you can narrow the pattern so it matches installs only. Try changing the search pattern to one of these:

$ awk '/rpm-ostree install/' ~/.bash_history
$ awk '/dnf install/' ~/.bash_history

All the entries of your bash command line history that contain the specified pattern anywhere on the line appear. Awk works on one line of a data file at a time. It matches a pattern, performs an action, then moves to the next line, until the end of file (EOF) is reached.

From an awk program

Using awk at the command line as above is not much different than piping output to grep, like this:

$ cat .bash_history | grep 'dnf install'

The end result of printing to standard output (stdout) is the same with both methods.

Awk is a programming language, and the command awk is an interpreter of that language. The real power and flexibility of awk is that you can make programs with it, and combine them with shell scripts to create even more powerful programs. For more feature-rich development with awk, you can also incorporate C or C++ code using Dynamic Extensions.

Next, to show the power of awk, let's print the header and draw five numbers for the first row of a bingo card. To do this, we'll create two awk program files.

The first file prints out the header of the bingo card. For this example it is called bingo-title.awk. Use your favorite editor to save this text as that file name:

 
BEGIN {
    print "B\tI\tN\tG\tO"
}

Now the title program is ready. You could try it out with this command:

$ awk -f bingo-title.awk

The program prints the word BINGO, with a tab (\t) between the characters. For the number selection, let's use one of awk's built-in numeric functions, rand(), together with the for control statement.

The title of the second awk program is bingo-num.awk. Enter the following into your favorite editor and save with that file name:

 
@include "bingo-title.awk"
BEGIN {
    for (i = 1; i < = 5; i++) {
    b = int(rand() * 15) + (15*(i-1))
    printf "%s\t", b
    }
    print
}

The @include statement in the file tells the interpreter to process the included file first. In this case the interpreter processes the bingo-title.awk file, so the title prints out first.

Running the test program

Now enter the command to pick a row of bingo numbers:

$ awk -f bingo-num.awk

Output appears similar to the following. Note that the rand() function in awk is not ideal for truly random numbers; it's used here for example purposes only.

 
$ awk -f bingo-num.awk
B   I   N   G   O
13  23  34  53  71

In this example, we created two programs with only BEGIN sections, using actions to manipulate data generated from within the awk program itself. To satisfy the rules of Bingo, more work is needed to achieve the desired results. The reader is encouraged to fix the programs so they reliably pick valid bingo numbers; take a look at the awk function srand() for a hint on how that could be done.
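As a starting point, here is a minimal sketch of that idea (one possible fix, not the definitive solution): calling srand() in the BEGIN section seeds the generator from the current time of day so each run differs, and adding 1 keeps each column in the range a bingo card expects.

@include "bingo-title.awk"
BEGIN {
    srand()    # seed the generator from the time of day so runs differ
    for (i = 1; i <= 5; i++) {
        # column ranges: B 1-15, I 16-30, N 31-45, G 46-60, O 61-75
        b = int(rand() * 15) + 1 + (15 * (i - 1))
        printf "%s\t", b
    }
    print
}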

Final examples

Awk can be useful even for mundane daily search tasks, like listing all Flatpaks on the Flathub repository from org.gnome (provided you have the Flathub repository set up). The command to do that would be:

$ flatpak remote-ls flathub --system | awk '/org.gnome/'

A listing appears that shows all output from remote-ls that matches the org.gnome pattern. To see Flatpaks already installed from org.gnome, enter this command:

$ flatpak list --system | awk '/org.gnome/'

Awk is a powerful and flexible programming language that fills a niche with text file manipulation exceedingly well.


Automate backups with restic and systemd

Timely backups are important. So much so that backing up software is a common topic of discussion, even here on the Fedora Magazine. This article demonstrates how to automate backups with restic using only systemd unit files.

For an introduction to restic, be sure to check out our article Use restic on Fedora for encrypted backups. Then read on for more details.

Two systemd services are required to automate taking snapshots and keeping data pruned. The first service runs the backup command at a regular frequency. The second service takes care of data pruning.

If you’re not familiar with systemd at all, there’s never been a better time to learn. Check out the series on systemd here at the Magazine, starting with the primer on unit files.

If you haven’t installed restic already, note it’s in the official Fedora repositories. To install it, use this command:

$ sudo dnf install restic

Backup

First, create the ~/.config/systemd/user/restic-backup.service file. Copy and paste the text below into the file for best results.

[Unit]
Description=Restic backup service
[Service]
Type=oneshot
ExecStartPre=restic unlock
ExecStart=restic backup --verbose --one-file-system --tag systemd.timer $BACKUP_EXCLUDES $BACKUP_PATHS
ExecStartPost=restic forget --verbose --tag systemd.timer --group-by "paths,tags" --keep-daily $RETENTION_DAYS --keep-weekly $RETENTION_WEEKS --keep-monthly $RETENTION_MONTHS --keep-yearly $RETENTION_YEARS
EnvironmentFile=%h/.config/restic-backup.conf

This service references an environment file in order to load secrets (such as RESTIC_PASSWORD). Create the ~/.config/restic-backup.conf file. Copy and paste the content below for best results. This example uses BackBlaze B2 buckets. Adjust the ID, key, repository, and password values accordingly.

BACKUP_PATHS="/home/rupert"
BACKUP_EXCLUDES="--exclude-file /home/rupert/.restic_excludes --exclude-if-present .exclude_from_backup"
RETENTION_DAYS=7
RETENTION_WEEKS=4
RETENTION_MONTHS=6
RETENTION_YEARS=3
B2_ACCOUNT_ID=XXXXXXXXXXXXXXXXXXXXXXXXX
B2_ACCOUNT_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
RESTIC_REPOSITORY=b2:XXXXXXXXXXXXXXXXXX:/
RESTIC_PASSWORD=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

Now that the service is installed, reload systemd: systemctl --user daemon-reload. Try running the service manually to create a backup: systemctl --user start restic-backup.

Because the service is a oneshot, it will run once and exit. After verifying that the service runs and creates snapshots as desired, set up a timer to run this service regularly. For example, to run the restic-backup.service daily, create ~/.config/systemd/user/restic-backup.timer as follows. Again, copy and paste this text:

[Unit]
Description=Backup with restic daily
[Timer]
OnCalendar=daily
Persistent=true
[Install]
WantedBy=timers.target

Enable it by running this command:

$ systemctl --user enable --now restic-backup.timer

Prune

While the main service runs the forget command to only keep snapshots within the keep policy, the data is not actually removed from the restic repository. The prune command inspects the repository and current snapshots, and deletes any data not associated with a snapshot. Because prune can be a time-consuming process, it is not necessary to run every time a backup is run. This is the perfect scenario for a second service and timer. First, create the file ~/.config/systemd/user/restic-prune.service by copying and pasting this text:

[Unit]
Description=Restic backup service (data pruning)
[Service]
Type=oneshot
ExecStart=restic prune
EnvironmentFile=%h/.config/restic-backup.conf

Similarly to the main restic-backup.service, restic-prune is a oneshot service and can be run manually. Once the service has been set up, create and enable a corresponding timer at ~/.config/systemd/user/restic-prune.timer:

[Unit]
Description=Prune data from the restic repository monthly
[Timer]
OnCalendar=monthly
Persistent=true
[Install]
WantedBy=timers.target
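As with the backup timer, enable it so the pruning runs on schedule:

$ systemctl --user enable --now restic-prune.timer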

That’s it! Restic will now run daily and prune data monthly.
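To confirm that both timers are scheduled, you can list them:

$ systemctl --user list-timers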


Photo by Samuel Zeller on Unsplash.


Managing RAID arrays with mdadm

Mdadm stands for Multiple Disk and Device Administration. It is a command line tool that can be used to manage software RAID arrays on your Linux PC. This article outlines the basics you need to get started with it.

The following five commands allow you to make use of mdadm’s most basic features:

  1. Create a RAID array:
    # mdadm --create /dev/md/test --homehost=any --metadata=1.0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  2. Assemble (and start) a RAID array:
    # mdadm --assemble /dev/md/test /dev/sda1 /dev/sdb1
  3. Stop a RAID array:
    # mdadm --stop /dev/md/test
  4. Delete a RAID array:
    # mdadm --zero-superblock /dev/sda1 /dev/sdb1
  5. Check the status of all assembled RAID arrays:
    # cat /proc/mdstat

Notes on features

mdadm --create

The create command shown above includes the following four parameters in addition to the create parameter itself and the device names:

  1. --homehost:
    By default, mdadm stores your computer’s name as an attribute of the RAID array. If your computer name does not match the stored name, the array will not automatically assemble. This feature is useful in server clusters that share hard drives because file system corruption usually occurs if multiple servers attempt to access the same drive at the same time. The name any is reserved and disables the homehost restriction.
  2. --metadata:
    mdadm reserves a small portion of each RAID device to store information about the RAID array itself. The metadata parameter specifies the format and location of the information. The value 1.0 indicates to use version-1 formatting and store the metadata at the end of the device.
  3. --level:
    The level parameter specifies how the data should be distributed among the underlying devices. Level 1 indicates each device should contain a complete copy of all the data. This level is also known as disk mirroring.
  4. --raid-devices:
    The raid-devices parameter specifies the number of devices that will be used to create the RAID array.

By using level=1 (mirroring) in combination with metadata=1.0 (store the metadata at the end of the device), you create a RAID1 array whose underlying devices appear normal if accessed without the aid of the mdadm driver. This is useful in the case of disaster recovery, because you can access the device even if the new system doesn’t support mdadm arrays. It’s also useful in case a program needs read-only access to the underlying device before mdadm is available. For example, the UEFI firmware in a computer may need to read the bootloader from the ESP before mdadm is started.

mdadm --assemble

The assemble command above fails if a member device is missing or corrupt. To force the RAID array to assemble and start when one of its members is missing, use the following command:

# mdadm --assemble --run /dev/md/test /dev/sda1

Other important notes

Avoid writing directly to any devices that underlie an mdadm RAID1 array. That causes the devices to become out of sync, and mdadm won’t know that they are out of sync. If you access a RAID1 array through a device that’s been modified out-of-band, you can cause file system corruption. If you modify a RAID1 device out-of-band and need to force the array to re-synchronize, delete the mdadm metadata from the device to be overwritten and then re-add it to the array as demonstrated below:

# mdadm --zero-superblock /dev/sdb1
# mdadm --assemble --run /dev/md/test /dev/sda1
# mdadm /dev/md/test --add /dev/sdb1

These commands completely overwrite the contents of sdb1 with the contents of sda1.
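The re-synchronization runs in the background and can take a while. You can monitor its progress with the status file mentioned earlier:

# watch cat /proc/mdstat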

To specify any RAID arrays to automatically activate when your computer starts, create an /etc/mdadm.conf configuration file.
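One common way to populate that file is to append the output of mdadm’s scan mode and then review the result before rebooting:

# mdadm --detail --scan >> /etc/mdadm.conf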

For the most up-to-date and detailed information, check the man pages:

$ man mdadm 
$ man mdadm.conf

The next article in this series will show a step-by-step guide on how to convert an existing single-disk Linux installation to a mirrored-disk installation that will continue running even if one of its hard drives suddenly stops working!


Kubernetes on Fedora IoT with k3s

Fedora IoT is an upcoming Fedora edition targeted at the Internet of Things. It was introduced last year on Fedora Magazine in the article How to turn on an LED with Fedora IoT. Since then, it has continued to improve together with Fedora Silverblue to provide an immutable base operating system aimed at container-focused workflows.

Kubernetes is an immensely popular container orchestration system. It is perhaps most commonly used on powerful hardware handling huge workloads. However, it can also be used on lightweight devices such as the Raspberry Pi 3. Read on to find out how.

Why Kubernetes?

While Kubernetes is all the rage in the cloud, running it on a small single board computer may not be immediately obvious. But there are certainly reasons for doing it. First of all, it is a great way to learn and get familiar with Kubernetes without the need for expensive hardware. Second, because of its popularity, there are tons of applications that come pre-packaged for running in Kubernetes clusters. Not to mention the large community that can provide help if you ever get stuck.

Last but not least, container orchestration may actually make things easier, even at the small scale of a home lab. This may not be apparent while tackling the learning curve, but these skills will help when dealing with any cluster in the future. It doesn’t matter if it’s a single node Raspberry Pi cluster or a large scale machine learning farm.

K3s – a lightweight Kubernetes

A “normal” installation of Kubernetes (if such a thing can be said to exist) is a bit on the heavy side for IoT. The recommendation is a minimum of 2 GB RAM per machine! However, there are plenty of alternatives, and one of the newcomers is k3s – a lightweight Kubernetes distribution.

K3s is quite special in that it has replaced etcd with SQLite for its key-value storage needs. Another thing to note is that k3s ships as a single binary instead of one per component. This diminishes the memory footprint and simplifies the installation. Thanks to the above, you should be able to run k3s with just 512 MB of RAM, perfect for a small single board computer!

What you will need

  1. Fedora IoT in a virtual machine or on a physical device. See the excellent getting started guide here. One machine is enough but two will allow you to test adding more nodes to the cluster.
  2. Configure the firewall to allow traffic on ports 6443 and 8472. Or simply disable it for this experiment by running “systemctl stop firewalld”.
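If you prefer to keep firewalld running, opening the ports might look like this (a sketch assuming 6443/tcp for the API server and 8472/udp for the flannel VXLAN traffic):

sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=8472/udp
sudo firewall-cmd --reload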

Install k3s

Installing k3s is very easy. Simply run the installation script:

curl -sfL https://get.k3s.io | sh -

This will download, install and start up k3s. After installation, get a list of nodes from the server by running the following command:

kubectl get nodes

Note that there are several options that can be passed to the installation script through environment variables. These can be found in the documentation. And of course, there is nothing stopping you from installing k3s manually by downloading the binary directly.

While great for experimenting and learning, a single node cluster is not much of a cluster. Luckily, adding another node is no harder than setting up the first one. Just pass two environment variables to the installation script to make it find the first node and avoid running the server part of k3s:

curl -sfL https://get.k3s.io | K3S_URL=https://example-url:6443 \
K3S_TOKEN=XXX sh -

The example-url above should be replaced by the IP address or fully qualified domain name of the first node. On that node the token (represented by XXX) is found in the file /var/lib/rancher/k3s/server/node-token.
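For example, you could read the token on the first node like this:

sudo cat /var/lib/rancher/k3s/server/node-token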

Deploy some containers

Now that we have a Kubernetes cluster, what can we actually do with it? Let’s start by deploying a simple web server.

kubectl create deployment my-server --image nginx

This will create a Deployment named “my-server” from the container image “nginx” (defaulting to docker hub as registry and the latest tag). You can see the Pod created by running the following command.

kubectl get pods

In order to access the nginx server running in the pod, first expose the Deployment through a Service. The following command will create a Service with the same name as the deployment.

kubectl expose deployment my-server --port 80

The Service works as a kind of load balancer and DNS record for the Pods. For instance, when running a second Pod, we will be able to curl the nginx server just by specifying my-server (the name of the Service). See the example below for how to do this.

# Start a pod and run bash interactively in it
kubectl run debug --generator=run-pod/v1 --image=fedora -it -- bash
# Wait for the bash prompt to appear
curl my-server
# You should get the "Welcome to nginx!" page as output

Ingress controller and external IP

By default, a Service only get a ClusterIP (only accessible inside the cluster), but you can also request an external IP for the service by setting its type to LoadBalancer. However, not all applications require their own IP address. Instead, it is often possible to share one IP address among many services by routing requests based on the host header or path. You can accomplish this in Kubernetes with an Ingress, and this is what we will do. Ingresses also provide additional features such as TLS encryption of the traffic without having to modify your application.

Kubernetes needs an ingress controller to make the Ingress resources work and k3s includes Traefik for this purpose. It also includes a simple service load balancer that makes it possible to get an external IP for a Service in the cluster. The documentation describes the service like this:

k3s includes a basic service load balancer that uses available host ports. If you try to create a load balancer that listens on port 80, for example, it will try to find a free host in the cluster for port 80. If no port is available the load balancer will stay in Pending.

k3s README

The ingress controller is already exposed with this load balancer service. You can find the IP address that it is using with the following command.

$ kubectl get svc --all-namespaces
NAMESPACE     NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
default       kubernetes   ClusterIP      10.43.0.1       <none>        443/TCP                      33d
default       my-server    ClusterIP      10.43.174.38    <none>        80/TCP                       30m
kube-system   kube-dns     ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP       33d
kube-system   traefik      LoadBalancer   10.43.145.104   10.0.0.8      80:31596/TCP,443:31539/TCP   33d

Look for the Service named traefik. In the above example the IP we are interested in is 10.0.0.8.

Route incoming requests

Let’s create an Ingress that routes requests to our web server based on the host header. This example uses xip.io to avoid having to set up DNS records. It works by including the IP address as a subdomain: any subdomain of 10.0.0.8.xip.io resolves to the IP 10.0.0.8. In other words, my-server.10.0.0.8.xip.io is used to reach the ingress controller in the cluster. You can try this right now (with your own IP instead of 10.0.0.8). Without an Ingress in place, you should reach the “default backend”, which is just a page showing “404 page not found”.

We can tell the ingress controller to route requests to our web server Service with the following Ingress.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-server
spec:
  rules:
  - host: my-server.10.0.0.8.xip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: my-server
          servicePort: 80

Save the above snippet in a file named my-ingress.yaml and add it to the cluster by running this command:

kubectl apply -f my-ingress.yaml

You should now be able to reach the default nginx welcoming page on the fully qualified domain name you chose. In my example this would be my-server.10.0.0.8.xip.io. The ingress controller is routing the requests based on the information in the Ingress. A request to my-server.10.0.0.8.xip.io will be routed to the Service and port defined as backend in the Ingress (my-server and 80 in this case).

What about IoT then?

Imagine the following scenario. You have dozens of devices spread out around your home or farm. It is a heterogeneous collection of IoT devices with various hardware capabilities, sensors and actuators. Maybe some of them have cameras, weather or light sensors. Others may be hooked up to control the ventilation, lights, blinds or blink LEDs.

In this scenario, you want to gather data from all the sensors, maybe process and analyze it before you finally use it to make decisions and control the actuators. In addition to this, you may want to visualize what’s going on by setting up a dashboard. So how can Kubernetes help us manage something like this? How can we make sure that Pods run on suitable devices?

The simple answer is labels. You can label the nodes according to capabilities, like this:

kubectl label nodes <node-name> <label-key>=<label-value>
# Example
kubectl label nodes node2 camera=available

Once they are labeled, it is easy to select suitable nodes for your workload with nodeSelectors. The final piece to the puzzle, if you want to run your Pods on all suitable nodes is to use DaemonSets instead of Deployments. In other words, create one DaemonSet for each data collecting application that uses some unique sensor and use nodeSelectors to make sure they only run on nodes with the proper hardware.
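For illustration, here is a minimal DaemonSet sketch along those lines; the names and the image are hypothetical, and the nodeSelector reuses the camera=available label from the example above:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: camera-collector
spec:
  selector:
    matchLabels:
      app: camera-collector
  template:
    metadata:
      labels:
        app: camera-collector
    spec:
      nodeSelector:
        camera: available
      containers:
      - name: collector
        image: example.com/camera-collector:latest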

The service discovery feature that allows Pods to find each other simply by Service name makes it quite easy to handle these kinds of distributed systems. You don’t need to know or configure IP addresses or custom ports for the applications. Instead, they can easily find each other through named Services in the cluster.

Utilize spare resources

With the cluster up and running, collecting data and controlling your lights and climate control you may feel that you are finished. However, there are still plenty of compute resources in the cluster that could be used for other projects. This is where Kubernetes really shines.

You shouldn’t have to worry about where exactly those resources are or calculate if there is enough memory to fit an extra application here or there. This is exactly what orchestration solves! You can easily deploy more applications in the cluster and let Kubernetes figure out where (or if) they will fit.

Why not run your own NextCloud instance? Or maybe gitea? You could also set up a CI/CD pipeline for all those IoT containers. After all, why would you build and cross compile them on your main computer if you can do it natively in the cluster?

The point here is that Kubernetes makes it easier to make use of the “hidden” resources that you often end up with otherwise. Kubernetes handles scheduling of Pods in the cluster based on available resources and fault tolerance so that you don’t have to. However, in order to help Kubernetes make reasonable decisions you should definitely add resource requests to your workloads.

Summary

While Kubernetes, or container orchestration in general, may not usually be associated with IoT, it certainly makes a lot of sense to have an orchestrator when you are dealing with distributed systems. Not only does it allow you to handle a diverse and heterogeneous fleet of devices in a unified way, but it also simplifies communication between them. In addition, Kubernetes makes it easier to utilize spare resources.

Container technology made it possible to build applications that could “run anywhere”. Now Kubernetes makes it easier to manage the “anywhere” part. And as an immutable base to build it all on, we have Fedora IoT.


Joe Doss: How Do You Fedora?

We recently interviewed Joe Doss on how he uses Fedora. This is part of a series on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the feedback form to express your interest in becoming an interviewee.

Who is Joe Doss?

Joe Doss lives in Chicago, Illinois USA and his favorite food is pizza. He is the Director of Engineering Operations at Kenna Security, Inc. Doss describes his employer this way: “Kenna uses data science to help enterprises combine their infrastructure and application vulnerability data with exploit intelligence to measure risk, predict attacks and prioritize remediation.”

His first Linux distribution was Red Hat Linux 5. A friend of his showed him a computer that wasn’t running Windows. Doss thought it was just a program to install on Windows when his friend gave him a Red Hat Linux 5 install disk. “I proceeded to install this Linux ‘program’ on my Father’s PC,” he says. Luckily for Doss, his father supported his interest in computers. “I ended up totally wiping out the Windows 95 install as a result and this was how I got my first computer.”

At Kenna, Doss’ group makes use of Fedora and Ansible: “We run Fedora Cloud in multiple VPC deployments in AWS and Google Compute with over 200 virtual machines. We use Ansible to automate everything we do with Fedora.”

Doss brews beer at home and contributes to open source in his free time. He also has a cat named Tibby. “I rescued Tibby off the street in the Hyde Park neighborhood of Chicago when she was 7 months old. She is not very smart, but she makes up for that with cuteness.” His favorite place to visit is his childhood home of Michigan, but Doss says, “anywhere with a warm beach, a cool drink, and the ocean is pretty nice too.”

Tibby the cute cat!

The Fedora community

Doss became involved with Fedora and the Fedora community through his job at Kenna Security. When he first joined the company they were using Ubuntu and Chef in production. There was a desire to make the infrastructure more reproducible and reliable, and he says, “I was able to greenfield our deployments with Fedora Cloud and Ansible.” This project got him involved in the Fedora Cloud release.

When asked about his first impression of the Fedora community, Doss said, “Overwhelming to be honest. There is so much going on and it is hard to figure out who are the stakeholders of each part of Fedora.” Once he figured out who he needed to talk to he found the community very welcoming and super supportive.

One of the ideas he had to improve the community was to unite the various projects and teams under one bug tracking tool and community resource. “Pagure, Bugzilla, Github, Fedora Forums, Discourse Forums, Mailing lists… it is all over the place and hard to navigate at first.” Despite the initial complexity of becoming familiar with the Fedora Project, Doss feels it is amazingly rewarding to be involved. “It feels awesome to be a part of a Linux distro that impacts so many people in very positive ways. You can make a difference.”

Doss called out Dusty Mabe at Red Hat for helping him become involved, saying Dusty “has been an amazing mentor and resource for enabling me to contribute back to Fedora.”

Doss has an interesting way of explaining to non-technical friends what he does. “Imagine changing the tires on a very large bus while it is going down the highway at 70 MPH and sometimes you need to get involved with the tire manufacturer to help make this process work well.” This metaphor helps people understand what replacing 200-plus VMs across more than five production VPCs in AWS and Google Compute with every Fedora release is like.

Doss drew my attention to one specific incident with Fedora 29 and Vagrant. “Recently we encountered an issue where Vagrant wouldn’t set the hostname on a fresh Fedora 29 Beta VM. This was due to Fedora 29 Cloud no longer shipping the network service stub in favor of NetworkManager. This led to me working with a colleague at Kenna Security to send a patch upstream to the Vagrant project to help their developers produce a fix for Fedora 29. Vagrant usage with Fedora is a very large part of our development cycle at Kenna, and having this broken before the Fedora 29 release would have impacted us a lot.” As Doss said, “Sometimes you need to help make the tires before they go on the bus.”

Doss is the Copr package maintainer for WireGuard VPN on Fedora, RHEL, and CentOS. “The CentOS repo just went over 60 thousand downloads last month, which is pretty awesome.”

What Hardware?

Doss uses Fedora 29 Cloud in over five VPC deployments in AWS and Google Compute. At home he has a SuperMicro SYS-5019A-FTN4 1U server that runs Fedora 29 Server with OpenShift OKD installed on it. His laptops are all Lenovo. “For laptops I use a ThinkPad T460s for work and a ThinkPad 25 at home. Both have Fedora 29 installed. ThinkPads are the best with Fedora.”

What Software?

Doss uses GNOME 3 as his preferred desktop on Fedora Workstation. “I use Sublime Text 3 for my text editor on the desktop or vim on servers.” For development and testing he uses Vagrant. “Ansible is what I use for any kind of automation with Fedora. I maintain an Ansible playbook for setting up my workstation.”

Ansible

I asked Doss if he had advice for people trying to learn Ansible.

“Start small. Automate the stuff that makes your life easier, but don’t over complicate it. Ansible Galaxy is a great resource to get things done quickly, but if you truly want to learn how to use Ansible, writing your own roles and playbooks is the path I would take.

“I have helped a lot of my coworkers that have joined my Operations team at Kenna get up to speed on using Ansible by buying them a copy of Ansible for DevOps by Jeff Geerling. This book will give anyone new to Ansible the foundation they need to start using it every day. #ansible on Freenode is a great resource as well, along with the official Ansible docs.”

Doss also said, “Knowing what to automate is most likely the most difficult thing to master without over complicating things. Debugging complex playbooks and roles is a close second.”

Home lab

He recommended setting up a home lab. “At Kenna and at home I use Vagrant with the Vagrant-libvirt plugin for developing Ansible roles and playbooks. You can iterate quickly to build your roles and playbooks on your laptop with your favorite editor and run vagrant provision to run your playbook. Quick feedback loop and the ability to burn down your Vagrant VM and start over quickly is an amazing workflow. Below is a sample Vagrant file that I keep handy to spin up a Fedora VM to test my playbooks.”

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  config.vm.provision "shell", inline: "dnf install nfs-utils rpcbind @development-tools @ansible-node redhat-rpm-config gcc-c++ -y"
  config.ssh.forward_agent = true
  config.vm.define "f29", autostart: false do |f29|
    f29.vm.box = "fedora/29-cloud-base"
    f29.vm.hostname = "f29.example.com"
    f29.vm.provider "libvirt" do |vm|
      vm.memory = 2048
      vm.cpus = 2
      vm.driver = "kvm"
      vm.nic_model_type = "e1000"
    end
  end
  config.vm.synced_folder '.', '/vagrant', disabled: true

  config.vm.provision "ansible" do |ansible|
    ansible.groups = {
    }
    ansible.playbook = "playbooks/main.yml"
    ansible.inventory_path = "inventory/development"
    ansible.extra_vars = {
      ansible_python_interpreter: "/usr/bin/python3"
    }
    # ansible.verbose = 'vvv'
  end
end

Managing Partitions with sgdisk

Roderick W. Smith’s sgdisk command can be used to manage the partitioning of your hard disk drive from the command line. The basics that you need to get started with it are demonstrated below.

The following six parameters are all that you need to know to make use of sgdisk’s most basic features:

  1. -p
    Print the partition table:
    # sgdisk -p /dev/sda
  2. -d x
    Delete partition x:
    # sgdisk -d 1 /dev/sda
  3. -n x:y:z
    Create a new partition numbered x, starting at y and ending at z:
    # sgdisk -n 1:1MiB:2MiB /dev/sda
  4. -c x:y
    Change the name of partition x to y:
    # sgdisk -c 1:grub /dev/sda
  5. -t x:y
    Change the type of partition x to y:
    # sgdisk -t 1:ef02 /dev/sda
  6. --list-types
    List the partition type codes:
    # sgdisk --list-types

The SGDisk Command

As you can see in the above examples, most of the commands require that the device file name of the hard disk drive to operate on be specified as the last parameter.

The parameters shown above can be combined so that you can completely define a partition with a single run of the sgdisk command:

# sgdisk -n 1:1MiB:2MiB -t 1:ef02 -c 1:grub /dev/sda

Relative values can be specified for some fields by prefixing the value with a + or - symbol. If you use a relative value, sgdisk will do the math for you. For example, the above example could be written as:

# sgdisk -n 1:1MiB:+1MiB -t 1:ef02 -c 1:grub /dev/sda

The value 0 has a special-case meaning for several of the fields:

  • In the partition number field, 0 indicates that the next available number should be used (numbering starts at 1).
  • In the starting address field, 0 indicates that the start of the largest available block of free space should be used. Some space at the start of the hard drive is always reserved for the partition table itself.
  • In the ending address field, 0 indicates that the end of the largest available block of free space should be used.

By using 0 and relative values in the appropriate fields, you can create a series of partitions without having to pre-calculate any absolute values. For example, the following sequence of sgdisk commands would create all the basic partitions that are needed for a typical Linux installation if run in sequence against a blank hard drive:

# sgdisk -n 0:0:+1MiB -t 0:ef02 -c 0:grub /dev/sda
# sgdisk -n 0:0:+1GiB -t 0:ea00 -c 0:boot /dev/sda
# sgdisk -n 0:0:+4GiB -t 0:8200 -c 0:swap /dev/sda
# sgdisk -n 0:0:0 -t 0:8300 -c 0:root /dev/sda

The above example shows how to partition a hard disk for a BIOS-based computer. The grub partition is not needed on a UEFI-based computer. Because sgdisk is calculating all the absolute values for you in the above example, you can just skip running the first command on a UEFI-based computer and the remaining commands can be run without modification. Likewise, you could skip creating the swap partition and the remaining commands would not need to be modified.
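If you are partitioning for a UEFI-based computer, you would create an EFI System Partition rather than the grub partition. A sketch, where the 512MiB size is a common choice rather than a requirement:

# sgdisk -n 0:0:+512MiB -t 0:ef00 -c 0:efi /dev/sda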

There is also a short-cut for deleting all the partitions from a hard disk with a single command:

# sgdisk --zap-all /dev/sda

For the most up-to-date and detailed information, check the man page:

$ man sgdisk

InitRAMFS, Dracut, and the Dracut Emergency Shell

The Linux startup process goes through several stages before reaching the final graphical or multi-user target. The initramfs stage occurs just before the root file system is mounted. Dracut is a tool that is used to manage the initramfs. The dracut emergency shell is an interactive mode that can be initiated while the initramfs is loaded.

This article will show how to use the dracut command to modify the initramfs. Some basic troubleshooting commands that can be run from the dracut emergency shell will also be demonstrated.

The InitRAMFS

Initramfs stands for Initial Random-Access Memory File System. On modern Linux systems, it is typically stored in a file under the /boot directory. The kernel version for which it was built will be included in the file name. A new initramfs is generated every time a new kernel is installed.

A Linux Boot Directory

By default, Fedora keeps the previous two versions of the kernel and its associated initramfs. This default can be changed by modifying the value of the installonly_limit setting in the /etc/dnf/dnf.conf file.
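For example, to keep four versions instead, the setting in /etc/dnf/dnf.conf would read:

installonly_limit=4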

You can use the lsinitrd command to list the contents of your initramfs archive:

The LsInitRD Command

The above screenshot shows that my initramfs archive contains the nouveau GPU driver. The modinfo command tells me that the nouveau driver supports several models of NVIDIA video cards. The lspci command shows that there is an NVIDIA GeForce video card in my computer’s PCI slot. There are also several basic Unix commands included in the archive such as cat and cp.
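For example, to check whether a particular driver made it into the archive for the running kernel, you can filter the listing (nouveau here, as in the screenshot):

$ lsinitrd | grep nouveau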

By default, the initramfs archive only includes the drivers that are needed for your specific computer. This allows the archive to be smaller and decreases the time that it takes for your computer to boot.

The Dracut Command

The dracut command can be used to modify the contents of your initramfs. For example, if you are going to move your hard drive to a new computer, you might want to temporarily include all drivers in the initramfs to be sure that the operating system can load on the new computer. To do so, you would run the following command:

# dracut --force --no-hostonly

The force parameter tells dracut that it is OK to overwrite the existing initramfs archive. The no-hostonly parameter overrides the default behavior of including only drivers that are germane to the currently-running computer and causes dracut to instead include all drivers in the initramfs.

By default dracut operates on the initramfs for the currently-running kernel. You can use the uname command to display which version of the Linux kernel you are currently running:

$ uname -r
5.0.5-200.fc29.x86_64

Once you have your hard drive installed and running in your new computer, you can re-run the dracut command to regenerate the initramfs with only the drivers that are needed for the new computer:

# dracut --force

There are also parameters to add arbitrary drivers, dracut modules, and files to the initramfs archive. You can also create configuration files for dracut and save them under the /etc/dracut.conf.d directory so that your customizations will be automatically applied to all new initramfs archives that are generated when new kernels are installed. As always, check the man page for the details that are specific to the version of dracut you have installed on your computer:

$ man dracut
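As a sketch of such a customization (the file name is arbitrary and the driver is only an example), a drop-in that forces a driver to always be included might look like this:

# /etc/dracut.conf.d/example.conf
add_drivers+=" nouveau "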

The Dracut Emergency Shell

The Dracut Emergency Shell

Sometimes something goes wrong during the initramfs stage of your computer’s boot process. When this happens, you will see “Entering emergency mode” printed to the screen followed by a shell prompt. This gives you a chance to try and fix things up manually and continue the boot process.

As a somewhat contrived example, let’s suppose that I accidentally deleted an important kernel parameter in my boot loader configuration:

# sed -i 's/ rd.lvm.lv=fedora\/root / /' /boot/grub2/grub.cfg

The next time I reboot my computer, it will seem to hang for several minutes while it is trying to find the root partition and eventually give up and drop to an emergency shell.

From the emergency shell, I can enter journalctl and then use the Space key to page down through the startup logs. Near the end of the log, I see a warning that reads “/dev/mapper/fedora-root does not exist”. I can then use the ls command to find out what does exist:

# ls /dev/mapper
control  fedora-swap

Hmm, the fedora-root LVM volume appears to be missing. Let’s see what I can find with the lvm command:

# lvm lvscan
  ACTIVE            '/dev/fedora/swap' [3.85 GiB] inherit
  inactive          '/dev/fedora/home' [22.85 GiB] inherit
  inactive          '/dev/fedora/root' [46.80 GiB] inherit

Ah ha! There’s my root partition. It’s just inactive. All I need to do is activate it and exit the emergency shell to continue the boot process:

# lvm lvchange -a y fedora/root
# exit
The Fedora Login Screen

The above example only demonstrates the basic concept. You can check the troubleshooting section of the dracut guide for a few more examples.

It is possible to access the dracut emergency shell manually by adding the rd.break parameter to your kernel command line. This can be useful if you need to access your files before any system services have been started.

Check the dracut.kernel man page for details about what kernel options your version of dracut supports:

$ man dracut.kernel