
How RPM packages are made: the source RPM

In a previous post, we looked at what RPM packages are. They are archives that contain files and metadata. This metadata tells RPM where to create or remove files when an RPM is installed or uninstalled. The metadata also contains information on “dependencies”, which, as you will remember from the previous post, can be either “runtime” or “build time”.

As an example, we will look at fpaste. You can download the RPM using dnf. This will download the latest version of fpaste that is available in the Fedora repositories. On Fedora 30, this is currently 0.3.9.2:

$ dnf download fpaste
...
fpaste-0.3.9.2-2.fc30.noarch.rpm

Since this is the built RPM, it contains only files needed to use fpaste:

$ rpm -qpl ./fpaste-0.3.9.2-2.fc30.noarch.rpm
/usr/bin/fpaste
/usr/share/doc/fpaste
/usr/share/doc/fpaste/README.rst
/usr/share/doc/fpaste/TODO
/usr/share/licenses/fpaste
/usr/share/licenses/fpaste/COPYING
/usr/share/man/man1/fpaste.1.gz

Source RPMs

The next link in the chain is the source RPM. All software in Fedora must be built from its source code. We do not include pre-built binaries. So, for an RPM file to be made, RPM (the tool) needs to be:

  • given the files that have to be installed,
  • told how to generate these files (if they need to be compiled, for example),
  • told where these files must be installed, and
  • told what other dependencies this particular software needs to work properly.

The source RPM holds all of this information. Source RPMs are archives just like RPMs, but as the name suggests, instead of holding built binary files they contain the source files for a piece of software. Let’s download the source RPM for fpaste:

$ dnf download fpaste --source
...
fpaste-0.3.9.2-2.fc30.src.rpm

Notice how the file ends with “src.rpm”. All RPMs are built from source RPMs. You can easily check what source RPM a “binary” RPM comes from using dnf too:

$ dnf repoquery --qf "%{SOURCERPM}" fpaste
fpaste-0.3.9.2-2.fc30.src.rpm

Also, since this is the source RPM, it does not contain built files. Instead, it contains the sources and instructions on how to build the RPM from them:

$ rpm -qpl ./fpaste-0.3.9.2-2.fc30.src.rpm
fpaste-0.3.9.2.tar.gz
fpaste.spec

Here, the first file is simply the source code for fpaste. The second is the “spec” file. The spec file is the recipe that tells RPM (the tool) how to create the RPM (the archive) using the sources contained in the source RPM; all the information that RPM (the tool) needs to build RPMs (the archives) is contained in spec files. When we package maintainers add software to Fedora, most of our time is spent writing and perfecting the individual spec files. When a software package needs an update, we go back and tweak the spec file. You can see the spec files for ALL packages in Fedora in our source repository at https://src.fedoraproject.org/browse/projects/

Note that one source RPM may contain the instructions to build multiple RPMs. fpaste is a very simple piece of software, where one source RPM generates one “binary” RPM. Python, on the other hand, is more complex. While there is only one source RPM, it generates multiple binary RPMs:

$ sudo dnf repoquery --qf "%{SOURCERPM}" python3
python3-3.7.3-1.fc30.src.rpm
python3-3.7.4-1.fc30.src.rpm

$ sudo dnf repoquery --qf "%{SOURCERPM}" python3-devel
python3-3.7.3-1.fc30.src.rpm
python3-3.7.4-1.fc30.src.rpm

$ sudo dnf repoquery --qf "%{SOURCERPM}" python3-libs
python3-3.7.3-1.fc30.src.rpm
python3-3.7.4-1.fc30.src.rpm

$ sudo dnf repoquery --qf "%{SOURCERPM}" python3-idle
python3-3.7.3-1.fc30.src.rpm
python3-3.7.4-1.fc30.src.rpm

$ sudo dnf repoquery --qf "%{SOURCERPM}" python3-tkinter
python3-3.7.3-1.fc30.src.rpm
python3-3.7.4-1.fc30.src.rpm

In RPM jargon, “python3” is the “main package”, and so the spec file will be called “python3.spec”. All the other packages are “sub-packages”. You can download the source RPM for python3 and see what’s in it too. (Hint: patches are also part of the source code):

$ dnf download --source python3
python3-3.7.4-1.fc30.src.rpm

$ rpm -qpl ./python3-3.7.4-1.fc30.src.rpm
00001-rpath.patch
00102-lib64.patch
00111-no-static-lib.patch
00155-avoid-ctypes-thunks.patch
00170-gc-assertions.patch
00178-dont-duplicate-flags-in-sysconfig.patch
00189-use-rpm-wheels.patch
00205-make-libpl-respect-lib64.patch
00251-change-user-install-location.patch
00274-fix-arch-names.patch
00316-mark-bdist_wininst-unsupported.patch
Python-3.7.4.tar.xz
check-pyc-timestamps.py
idle3.appdata.xml
idle3.desktop
python3.spec

Building an RPM from a source RPM

Now that we have the source RPM, and know what’s in it, we can rebuild our RPM from it. Before we do so, though, we should set our system up to build RPMs. First, we install the required tools:

$ sudo dnf install fedora-packager

This will install the rpmbuild tool. rpmbuild requires a default directory layout so that it knows where each required component of the source RPM goes. Let’s see what those locations are:

# Where should the spec file go?
$ rpm -E %{_specdir}
/home/asinha/rpmbuild/SPECS

# Where should the sources go?
$ rpm -E %{_sourcedir}
/home/asinha/rpmbuild/SOURCES

# Where is the temporary build directory?
$ rpm -E %{_builddir}
/home/asinha/rpmbuild/BUILD

# Where is the buildroot?
$ rpm -E %{_buildrootdir}
/home/asinha/rpmbuild/BUILDROOT

# Where will the source RPMs be?
$ rpm -E %{_srcrpmdir}
/home/asinha/rpmbuild/SRPMS

# Where will the built RPMs be?
$ rpm -E %{_rpmdir}
/home/asinha/rpmbuild/RPMS
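
All of these directories live under a single top-level directory, which you can print with the %{_topdir} macro (on a default setup this is the rpmbuild folder in your home directory):

$ rpm -E %{_topdir}
/home/asinha/rpmbuild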

I have all of this set up on my system already:

$ cd
$ tree -L 1 rpmbuild/
rpmbuild/
├── BUILD
├── BUILDROOT
├── RPMS
├── SOURCES
├── SPECS
└── SRPMS

6 directories, 0 files

RPM provides a tool that sets it all up for you too:

$ rpmdev-setuptree

Then we ensure that we have all the build dependencies for fpaste installed:

$ sudo dnf builddep fpaste-0.3.9.2-3.fc30.src.rpm

For fpaste, you only need Python, and that must already be installed on your system (dnf uses Python too). The builddep command can also be given a spec file instead of a source RPM. Read more in the man page:

$ man dnf.plugin.builddep
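
For example, if you first install the source RPM into your rpmbuild tree (rpm -i unpacks it into the SOURCES and SPECS directories shown above), you can point builddep at the spec file directly:

$ rpm -i fpaste-0.3.9.2-3.fc30.src.rpm
$ sudo dnf builddep ~/rpmbuild/SPECS/fpaste.spec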

Now that we have all that we need, building an RPM from a source RPM is as simple as:

$ rpmbuild --rebuild fpaste-0.3.9.2-3.fc30.src.rpm
..
..

$ tree ~/rpmbuild/RPMS/noarch/
/home/asinha/rpmbuild/RPMS/noarch/
└── fpaste-0.3.9.2-3.fc30.noarch.rpm

0 directories, 1 file

rpmbuild will install the source RPM and build your RPM from it. You can now install the RPM and use it as you normally would, using dnf. Of course, as said before, if you want to change anything in the RPM, you must modify the spec file; we’ll cover spec files in the next post.
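
For example, to install the package you just built, point dnf at the file that rpmbuild produced (the path comes from the tree output above; adjust it to match your own build):

$ sudo dnf install ~/rpmbuild/RPMS/noarch/fpaste-0.3.9.2-3.fc30.noarch.rpm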

Summary

To summarise this post in two short points:

  • the RPMs we generally install to use software are “binary” RPMs that contain built versions of the software
  • these are built from source RPMs that include the source code and the spec file that are needed to generate the binary RPMs.

If you’d like to get started with building RPMs, and help the Fedora community maintain the massive amount of software we provide, you can start here: https://fedoraproject.org/wiki/Join_the_package_collection_maintainers

For any queries, post to the Fedora developers mailing list—we’re always happy to help!


Managing credentials with KeePassXC

A previous article discussed password management tools that use server-side technology. Those tools are very interesting and well suited to a cloud installation.
In this article we will talk about KeePassXC, a simple, multi-platform, open source password manager that uses a local file as its database.
The main advantage of this type of password management is simplicity: no server-side expertise is required, so it can be used by any type of user.

Introducing KeePassXC

KeePassXC is an open source, cross-platform password manager: its development started as a fork of KeePassX, a good product whose development had become rather inactive. It saves secrets in a database encrypted with the AES algorithm using a 256-bit key, which makes it reasonably safe to store the database in cloud storage such as pCloud or Dropbox.

In addition to passwords, KeePassXC lets you save various kinds of information and attachments in the encrypted wallet. It also has a solid password generator that helps the user manage credentials correctly.

Installation

The program is available both in the standard Fedora repository and in the Flathub repository. Unfortunately, browser integration does not work when the application runs in the sandbox, so I suggest installing the program via dnf:

sudo dnf install keepassxc

Creating your wallet

To create a new database there are two important steps:

  • Choose the encryption settings: the default settings are reasonably safe; increasing the transform rounds also increases the decryption time.
  • Choose the master key and additional protections: the master key must be easy to remember (if you lose it, your wallet is lost!) but strong enough; a passphrase of at least 4 random words can be a good choice. As additional protection you can choose a key file (remember: you must always have it available, otherwise you cannot open the wallet) and/or a YubiKey hardware key.

The database file will be saved to the file system. If you want to share it with other computers or devices, you can save it on a USB key or in cloud storage like pCloud or Dropbox. Of course, if you choose cloud storage, a particularly strong master password is recommended, better still if accompanied by additional protection.

Creating your first entry

Once the database has been created, you can start creating your first entry. For a web login, specify a username, password and URL in the Entry tab. Optionally, you can specify an expiration date for the credentials based on your personal policy. A nice touch: pressing the button on the right downloads the site’s favicon and associates it with the entry as its icon.

KeePassXC also offers a good password/passphrase generator: you can choose length and complexity and check the degree of resistance to a brute force attack:

Browser integration

KeePassXC has an extension available for all major browsers. The extension allows you to fill in the login information for all the entries whose URL is specified.

Browser integration must be enabled in KeePassXC (Tools menu -> Settings), specifying which browsers you intend to use:

Once the extension is installed, it is necessary to create a connection with the database. To do this, press the extension button and then the Connect button: if the database is open and unlocked, the extension will create an association key and save it in the database. The key is unique to the browser, so I suggest naming it appropriately:

When you reach the login page specified in the URL field and the database is unlocked, the extension will offer you all the credentials you have associated with that page:

In this way, as long as KeePassXC is running while you browse, your internet credentials are available without having to save them in the browser.

SSH agent integration

Another interesting feature of KeePassXC is the integration with SSH. If you have ssh-agent running, KeePassXC can interact with it and add the SSH keys that you have uploaded as attachments to your entries.

First of all, in the general settings (Tools menu -> Settings) you have to enable the SSH agent integration and restart the program:

At this point you need to upload your SSH key pair as an attachment to your entry. Then, in the “SSH agent” tab, select the private key in the attachment drop-down list; the public key will be populated automatically. Don’t forget to select the two checkboxes above to allow the key to be added to the agent when the database is opened/unlocked and removed when the database is closed/locked:

Now, with the database open and unlocked, you can log in over SSH using the keys saved in your wallet.
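
You can check that KeePassXC really handed the key to the agent by listing the agent’s identities from a terminal (the fingerprint line below is only an illustration):

$ ssh-add -l
256 SHA256:Kq3... you@example.com (ED25519)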

The only limitation is the maximum number of keys that can be added to the agent: by default, SSH servers do not accept more than 5 login attempts, and for security reasons it is not recommended to increase this value.


Getting Started with Go on Fedora

The Go programming language was first publicly announced in 2009, and since then it has become widely adopted. In particular, Go has become a reference in the world of cloud infrastructure, with big projects like Kubernetes, OpenShift and Terraform.

Some of the main reasons for Go’s increasing popularity are its performance, the ease of writing fast concurrent applications, the simplicity of the language and its fast compilation time. So let’s see how to get started with Go on Fedora.

Install Go in Fedora

Fedora provides an easy way to install the Go programming language via the official repository.

$ sudo dnf install -y golang
$ go version
go version go1.12.7 linux/amd64

Now that Go is installed, let’s write a simple program, compile it and execute it.

First program in Go

Let’s write the traditional “Hello, World!” program in Go. First create a main.go file and type or copy the following.

package main

import "fmt"

func main() {
    fmt.Println("Hello, World!")
}

Running this program is quite simple.

$ go run main.go
Hello, World!

This will build a binary from main.go in a temporary directory, execute the binary, then delete the temporary directory. This command is really great to quickly run the program during development and it also highlights the speed of Go compilation.

Building an executable of the program is as simple as running it.

$ go build main.go
$ ./main
Hello, World!

Using Go modules

Go 1.11 and 1.12 introduce preliminary support for modules. Modules are a solution for managing application dependencies. This solution is based on two files, go.mod and go.sum, used to explicitly define the version of the dependencies.

To show how to use modules, let’s add a dependency to the hello world program.

Before changing the code, the module needs to be initialized.

$ go mod init helloworld
go: creating new go.mod: module helloworld
$ ls
go.mod main main.go

Next, modify the main.go file as follows.

package main

import "github.com/fatih/color"

func main() {
    color.Blue("Hello, World!")
}

In the modified main.go, instead of using the standard library “fmt” to print “Hello, World!”, the application uses an external library that makes it easy to print text in color.

Let’s run this version of the application.

$ go run main.go
Hello, World! 

Now that the application depends on the github.com/fatih/color library, go needs to download all the dependencies before compiling it. The list of dependencies is then added to go.mod, and the exact versions and checksums of these dependencies are recorded in go.sum.
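
After the build you can inspect go.mod to see what was recorded. The version below is only illustrative, and go may also add the library’s own dependencies with an // indirect marker; the exact entries depend on what is current when you run the command:

$ cat go.mod
module helloworld

go 1.12

require github.com/fatih/color v1.7.0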


Command line quick tips: Searching with grep

If you use your Fedora system for more than just browsing the web, you have probably needed to search for text in your files. For instance, you might be a developer that can’t remember where you left some code snippet. Or you might be looking for a setting stored in your system configuration files. Whatever the reason, there are plenty of ways to search for text on your Fedora system. This article will show you how, including using the built-in utility grep.

Introducing grep

The grep utility allows you to search for text, or more specifically text patterns, on your file system. The name grep comes from global regular expression print. Yikes, what a mouthful! This is because a regular expression (or regex) is a way of defining text patterns.

The grep utility lets you find and print out matches on these patterns — thus the name. It’s a powerful system, and you can even find it in modern code editors like Visual Studio Code or Atom.

Regular expressions

Harnessing all the power of regular expressions is a topic bigger than this article, for sure. The simplest kind of regex can be just a word, or a portion of a word. That pattern is simply “the following characters, in the same order.” The pattern is searched line by line. For example:

  • pciutil – matches any time the 7 characters pciutil appear together — including pciutil, pciutils, pciutil123, and foopciutil.
  • ^pciutil – matches any time the 7 characters pciutil appear together immediately at the beginning of a line (that’s what the ^ stands for)
  • pciutil$ – matches any time the 7 characters pciutil appear together immediately before the end of a line (that’s what the $ stands for)

More complicated expressions are also possible. Special characters are used in a regex as wildcards, or to change the way the regex works. If you want to match on one of these characters, use a \ (backslash) before the character.

For instance, the . (period or full stop) is a wildcard that matches any single character. If you use it in the expression pci.til, it matches pciutil, pci4til, or pci!til, but does not match pcitil. There must be a character to match the . in the regular expression.

The ? is a marker in a regex that marks the previous element as optional. So if you built on the previous example, the expression pci.?til would also match on pcitil because there need not be a character between i and t for a valid match.

The + and * are markers that stand for repetition. While + stands for one or more of the previous element, * stands for zero or more. So the regex pci.+til would match any of these: pciutil, pci4til, pci!til, pciuuuuuutil, pci423til. However, it wouldn’t match pcitil — but the regex pci.*til would.
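
Note that ?, + and the other extended operators are only treated as special when you pass grep the -E switch (or when you escape them, as in the \? example later in this article). A quick way to experiment is to pipe test strings into grep -E:

$ echo pciuuuutil | grep -E 'pci.+til'
pciuuuutil
$ echo pcitil | grep -E 'pci.+til'
$ echo pcitil | grep -E 'pci.*til'
pcitil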

Examples of grep

Now that you know a little about regex, let’s put it to work. Imagine that you’re trying to find a configuration file that mentions a user account jpublic. You tried a bunch of files already, but none were the correct one, and you’re sure it’s there. So, try searching the /etc folder (using sudo because some subfolders are not readable outside the root account):

$ sudo grep -r jpublic /etc/

The -r switch searches the folder recursively. The utility prints a list of matching files, and the line where the hit occurred. In most modern terminal environments, the hit is color highlighted for better readability.
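
If a recursive search turns up too much noise, you can narrow it to particular file names with the --include option. For example, to look only inside .conf files (the glob here is just an illustration):

$ sudo grep -r --include='*.conf' jpublic /etc/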

Imagine you have a much larger selection of files in /home/shared and you need to establish which ones mention the name MacNulty. However, you’re not sure whether the capitalization will be consistent, and you’re just looking for names of files, not the context. Also, you believe someone may have misspelled the name as McNulty in some places.

Use the -l switch to output only the names of files with a match, a ? marker to make the a in the name optional, and -i to make the search case-insensitive:

$ sudo grep -irl 'ma\?cnulty' /home/shared

This command will match on strings like Macnulty, McNulty, Mcnulty, and macNulty with no problem. You’ll get a simple list of filenames where the match was found in the contents.

These are only the simplest ways to use grep and regular expressions. You can learn a lot more about both using the info grep command.

But wait, there’s more…

The grep command is venerable but in some situations may not be as efficient as newer search utilities. For instance, the ripgrep utility is engineered to be a fast search utility that can take the place of grep. We covered ripgrep as part of an article on Rust and Rust applications previously in the Magazine:

It’s important to note that ripgrep has its own command line switches and syntax. For example, it has simple switches to print only filename matches, invert searches, and many other useful functions. It can also ignore based on .rgignore files placed in any subdirectories. (It’s also noteworthy that the -r switch is used differently for ripgrep, because it is automatically recursive.)

To install, use this command:

$ sudo dnf install ripgrep

To explore the options, use the manual page (man rg). You’ll find that many, but not all, options are the same as grep.

Have fun searching!



Cockpit and the evolution of the Web User Interface

Over 3 years ago the Fedora Magazine published an article entitled Cockpit: an overview. Since then, the interface has seen some eye-catching changes. Today’s Cockpit is cleaner, and the larger fonts make better use of screen real estate.

This article will go over some of the changes made to the UI. It will also explore some of the general tools available in the web interface to simplify those monotonous sysadmin tasks.

Cockpit installation

Cockpit can be installed using the dnf install cockpit command. This gives a minimal setup with the basic tools required to use the interface.

Another option is to install the Headless Management group. This will install additional packages used to extend the usability of Cockpit. It includes extensions for NetworkManager, software packages, disk, and SELinux management.
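
A minimal sketch of those two install options follows; the group name assumes it has not changed in your Fedora release:

$ sudo dnf install cockpit
$ sudo dnf group install "Headless Management"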

Run the following commands to enable the web service on boot and open the firewall port:

$ sudo systemctl enable --now cockpit.socket
Created symlink /etc/systemd/system/sockets.target.wants/cockpit.socket -> /usr/lib/systemd/system/cockpit.socket
$ sudo firewall-cmd --permanent --add-service cockpit
success
$ sudo firewall-cmd --reload
success

Logging into the web interface

To access the web interface, open your favourite browser and enter the server’s domain name or IP in the address bar followed by the service port (9090). Because Cockpit uses HTTPS, the installation will create a self-signed certificate to encrypt passwords and other sensitive data. You can safely accept this certificate, or request a CA certificate from your sysadmin or a trusted source.

Once the certificate is accepted, the new and improved login screen will appear. Long-time users will notice the username and password fields have been moved to the top. In addition, the white background behind the credential fields immediately grabs the user’s attention.

A feature added to the login screen since the previous article is logging in with sudo privileges — if your account is a member of the wheel group. Check the box beside Reuse my password for privileged tasks to elevate your rights.

Another addition to the login screen is the option to connect to remote servers also running the Cockpit web service. Click Other Options and enter the host name or IP address of the remote machine to manage it from your local browser.

Home view

Right off the bat we get a basic overview of common system information. This includes the make and model of the machine, the operating system, whether the system is up-to-date, and more.

Clicking the make/model of the system displays hardware information such as the BIOS/Firmware. It also includes details about the components as seen with lspci.

Clicking on any of the options to the right will display the details of that device. For example, the % of CPU cores option reveals details on how much is used by the user and the kernel. In addition, the Memory & Swap graph displays how much of the system’s memory is used, how much is cached, and how much of the swap partition is active. The Disk I/O and Network Traffic graphs are linked to the Storage and Networking sections of Cockpit. These topics will be revisited in an upcoming article that explores the system tools in detail.

Secure Shell Keys and authentication

Because security is a key factor for sysadmins, Cockpit now has the option to view the machine’s MD5 and SHA256 key fingerprints. Clicking the Show fingerprints option reveals the server’s ECDSA, ED25519, and RSA key fingerprints.

You can also add your own keys by clicking on your username in the top-right corner and selecting Authentication. Click on Add keys to validate the machine on other systems. You can also revoke your privileges in the Cockpit web service by clicking on the X button to the right.

Changing the host name and joining a domain

Changing the host name is a one-click solution from the home page. Click the host name currently displayed, and enter the new name in the Change Host Name box. One of the latest features is the option to provide a Pretty name.

Another feature added to Cockpit is the ability to connect to a directory server. Click Join a domain and a pop-up will appear requesting the domain address or name, organization unit (optional), and the domain admin’s credentials. The Domain Membership group provides all the packages required to join an LDAP server, including FreeIPA and the popular Active Directory.

To opt-out, click on the domain name followed by Leave Domain. A warning will appear explaining the changes that will occur once the system is no longer on the domain. To confirm click the red Leave Domain button.

Configuring NTP and system date and time

Using the command-line and editing config files definitely takes the cake when it comes to maximum tweaking. However, there are times when something more straightforward would suffice. With Cockpit, you have the option to set the system’s date and time manually or automatically using NTP. Once synchronized, the information icon on the right turns from red to blue. The icon will disappear if you manually set the date and time.

To change the timezone, type the continent and a list of cities will populate beneath.

Shutting down and restarting

You can easily shut down and restart the server right from the home screen in Cockpit. You can also delay the shutdown/reboot and send a message to warn users.

Configuring the performance profile

If the tuned and tuned-utils packages are installed, performance profiles can be changed from the main screen. By default it is set to a recommended profile. However, if the purpose of the server requires more performance, we can change the profile from Cockpit to suit those needs.

Terminal web console

A Linux sysadmin’s toolbox would be useless without access to a terminal. This allows admins to fine-tune the server beyond what’s available in Cockpit. With the addition of themes, admins can quickly adjust the text and background colours to suit their preference.

Also, if you type exit by mistake, click the Reset button in the top-right corner. This will provide a fresh screen with a flashing cursor.

Adding a remote server and the Dashboard overlay

The Headless Management group includes the Dashboard module (cockpit-dashboard). This provides an overview of the CPU, memory, network, and disk performance in a real-time graph. Remote servers can also be added and managed through the same interface.

For example, to add a remote computer in Dashboard, click the + button. Enter the name or IP address of the server and select the colour of your choice. This helps to differentiate the stats of the servers in the graph. To switch between servers, click on the host name (as seen in the screen-cast below). To remove a server from the list, click the check-mark icon, then click the red trash icon. The example below demonstrates how Cockpit manages a remote machine named server02.local.lan.

Documentation and finding help

As always, the man pages are a great place to find documentation. A simple search on the command line turns up pages pertaining to different aspects of using and configuring the web service.

$ man -k cockpit
cockpit (1) - Cockpit
cockpit-bridge (1) - Cockpit Host Bridge
cockpit-desktop (1) - Cockpit Desktop integration
cockpit-ws (8) - Cockpit web service
cockpit.conf (5) - Cockpit configuration file

The Fedora repository also has a package called cockpit-doc. The package’s description explains it best:

The Cockpit Deployment and Developer Guide shows sysadmins how to deploy Cockpit on their machines as well as helps developers who want to embed or extend Cockpit.

For more documentation visit https://cockpit-project.org/external/source/HACKING

Conclusion

This article only touches upon some of the main functions available in Cockpit. Managing storage devices, networking, user accounts, and software control will be covered in an upcoming article, along with optional extensions such as the 389 directory service and the cockpit-ostree module used to handle packages in Fedora Silverblue.

The options continue to grow as more users adopt Cockpit. The interface is ideal for admins who want a light-weight interface to control their server(s).

What do you think about Cockpit? Share your experience and ideas in the comments below.


Taz Brown: How Do You Fedora?

We recently interviewed Taz Brown on how she uses Fedora. This is part of a series on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the feedback form to express your interest in becoming an interviewee.

Taz Brown is a seasoned IT professional with over 15 years of experience. “I have worked as a systems administrator, senior Linux administrator, DevOps engineer and I now work as a senior Ansible automation consultant at Red Hat with the Automation Practice Team.” Taz originally started out on Ubuntu, but moved to CentOS, Red Hat Enterprise Linux and Fedora as a Linux administrator in the IT industry.

Taz is relatively new to contributing to open source, but she found that code was not the only way to contribute. “I prefer to contribute through documentation as I am not a software developer or engineer. I found that there was more than one way to contribute to open source than just through code.”

All about Taz

Her childhood hero is Wonder Woman. Her favorite movie is Hackers. “My favorite scene is the beginning of the movie,” Taz tells the Magazine. “The movie starts with a group of special agents breaking into a house to catch the infamous hacker, Zero Cool. We soon discover that Zero Cool is actually 11-year-old Dade Murphy, who managed to crash 1,507 computer systems in one day. He is charged for his crimes and his family is fined $45,000. Additionally, he is banned from using computers or touch-tone telephones until he is 18.”

Her favorite character in the movie is Paul Cook. “Paul Cook, Lord Nikon, played by Laurence Mason was my favorite character. One of the main reasons is that I never really saw a hacker movie that had characters that looked like me so I was fascinated by his portrayal. He was enigmatic. It was refreshing to see and it made me real proud that I was passionate about IT and that I was a geek of sorts.”

Taz is an amateur photographer and uses a Nikon D3500. “I definitely like vintage things so I am looking to add a new one to my collection soon.” She also enjoys 3D printing, and drawing. “I use open source tools in my hobbies such as Wekan, which is an open-source kanban utility.”

Taz Brown with Astronaut

The Fedora community

Taz first started using Linux about 8 years ago. “I started using Ubuntu and then graduated to Fedora and its community and I was hooked. I have been using Fedora now for about 5 years.”

When she became a Linux Administrator, Linux turned into a passion. “I was trying to find my way in terms of contributing to open source. I didn’t know where to go so I wondered if I could truly be an open source enthusiast and influencer because the community is so vast, but once I found a few people who embraced my interests and could show me the way, I was able to open up and ask questions and learn from the community.”

Taz first became involved with the Fedora community through her work as a Linux systems engineer while working at Mastercard. “My first impressions of the Fedora community was one of true collaboration, respect and sharing.”

When Brown talked about the Fedora Project she gave an excellent analogy. “America is a melting pot and that’s how I see open source projects like the Fedora Project. There is plenty of room for diverse contributions to the Fedora Project. There are so many ways in which to get and stay involved and there is also room for new ideas.”

When we asked Brown about what she would like to see improved in the Fedora community, she commented on making others more aware of the opportunities. “I wish those who are typically underrepresented in tech were more aware of the amazing commitment that the Fedora Project has to diversity and inclusion in open source and in the Fedora community.”

Next Taz had some advice for people looking to join the Fedora Community. “It’s a great decision and one that you likely will not regret joining. Fedora is a project with a very large supportive community and if you’re new to open source, it’s definitely a great place to start. There is a lot of cool stuff in Fedora. I believe there are limitless opportunities for The Fedora Project.”

What hardware?

Taz uses a Lenovo ThinkServer TS140 with 64 GB of RAM, four 1 TB SSDs and a 1 TB HDD for data storage. The server is currently running Fedora 30. She also has a Synology NAS with 164 TB of storage using a RAID 5 configuration. Taz also has a Logitech MX Master and MX Master 2S. “For my keyboard, I use a Kinesis Advantage 2.” She also uses two 38-inch LG ultrawide curved monitors and a single 34-inch LG ultrawide monitor.

She owns a System76 laptop. “I use the 16.1-inch Oryx Pro by System76 with IPS Display with i7 processor with 6 cores and 12 threads.” It has an RTX 2060 with 6 GB of GDDR6 memory and 1920 CUDA cores, as well as 64 GB of DDR4 RAM and a total of 4 TB of SSD storage. “I love the way Fedora handles my peripherals and like my mouse and keyboard. Everything works seamlessly. Plug and play works as it should and performance never suffers.”

Amazing Monitor Setup

What software?

Brown is currently running Fedora 30. She has a variety of software in her everyday workflow. “I use Wekan, which is an open-source kanban, which I use to manage my engagements and projects. My favorite editor is Atom, though I used to use Sublime at one point in time.”

And as for terminals? “I use Terminator as my go-to terminal because of grid arrangement as well as its many keyboard shortcuts and its tab formation.” Taz continues, “I love using neofetch which comes up with a nifty fedora logo and system information every time I log in to the terminal. I also have my terminal pimped out using powerline and powerlevel9k and vim-powerline as well.”

Taz Brown screenshot of Linux terminal.


Use a drop-down terminal for fast commands in Fedora

A drop-down terminal lets you tap a key and quickly enter any command on your desktop. Often it slides the terminal into view smoothly, sometimes with effects. This article demonstrates how drop-down terminals like Yakuake, Tilda, Guake and a GNOME extension help to improve and speed up daily tasks.

Yakuake

Yakuake is a drop-down terminal emulator based on KDE Konsole technology. It is distributed under the terms of the GNU GPL Version 2. It includes features such as:

  • Smoothly rolls down from the top of your screen
  • Tabbed interface
  • Configurable dimensions and animation speed
  • Skinnable
  • Sophisticated D-Bus interface

To install Yakuake, use the following command:

$ sudo dnf install -y yakuake

Startup and configuration

If you’re running KDE, open the System Settings and go to Startup and Shutdown. Add yakuake to the list of programs under Autostart, like this:

It’s easy to configure Yakuake while running the app. To begin, launch the program at the command line:

$ yakuake &

The following welcome dialog appears. You can set a new keyboard shortcut if the standard one conflicts with another keystroke you already use:

Now click the menu button, and the following help menu appears. Next, select Configure Yakuake… to access the configuration options.

You can customize the options for appearance, such as opacity; behavior, such as focusing terminals when the mouse pointer is moved over them; and window, such as size and animation. In the window options you’ll find one of the most useful options if you use two or more monitors: Open on screen: At mouse location.

Using Yakuake

The main shortcuts are:

  • F12 = Open/Retract Yakuake
  • Ctrl+F11 = Full Screen Mode
  • Ctrl+) = Split Top/Bottom
  • Ctrl+( = Split Left/Right
  • Ctrl+Shift+T = New Session
  • Shift+Right = Next Session
  • Shift+Left = Previous Session
  • Ctrl+Alt+S = Rename Session

Below is an example of Yakuake being used to split the session like a terminal multiplexer. Using this feature, you can run several shells in one session.

Tilda

Tilda is a drop-down terminal that compares with other popular terminal emulators such as GNOME Terminal, KDE’s Konsole, xterm, and many others.

It features a highly configurable interface. You can even change options such as the terminal size and animation speed. Tilda also lets you enable hotkeys you can bind to commands and operations.

To install Tilda, run this command:

$ sudo dnf install -y tilda

Startup and configuration

Most users prefer to have a drop-down terminal available behind the scenes when they log in. To set this up, first go to the app launcher on your desktop, search for Tilda, and open it.

Next, open up the Tilda Config window. Select Start Tilda hidden, which means it will not display a terminal immediately when started.

Next, you’ll set your desktop to start Tilda automatically. If you’re using KDE, go to System Settings > Startup and Shutdown > Autostart and use Add a Program.

If you’re using GNOME, you can run this command in a terminal:

$ ln -s /usr/share/applications/tilda.desktop ~/.config/autostart/

When you run Tilda for the first time, a wizard shows up to set your preferences. If you need to change something later, right-click and go to Preferences in the menu.

You can also create multiple configuration files, and bind other keys to open new terminals at different places on the screen. To do that, run this command:

$ tilda -C

Every time you use the above command, Tilda creates a new config file located in the ~/.config/tilda/ folder called config_0, config_1, and so on. You can then map a key combination to open a new Tilda terminal with a specific set of options.

Using Tilda

The main shortcuts are:

  • F1 = Pull Down Terminal Tilda (Note: If you have more than one config file, the shortcuts are the same, with a different open/retract shortcut like F1, F2, F3, and so on)
  • F11 = Full Screen Mode
  • F12 = Toggle Transparency
  • Ctrl+Shift+T = Add Tab
  • Ctrl+Page Up = Go to Next Tab
  • Ctrl+Page Down = Go to Previous Tab

GNOME Extension

The Drop-down Terminal GNOME Extension lets you use this useful tool in your GNOME Shell. It is easy to install and configure, and gives you fast access to a terminal session.

Installation

Open a browser and go to the site for this GNOME extension. Toggle the extension setting to On, as shown here:

Then select Install to install the extension on your system.

Once you do this, there’s no reason to set any autostart options. The extension will automatically run whenever you log in to GNOME!

Configuration

After install, the Drop Down Terminal configuration window opens to set your preferences. For example, you can set the size of the terminal, animation, transparency, and scrollbar use.

If you need to change some preferences in the future, run the gnome-shell-extension-prefs command and choose Drop Down Terminal.

Using the extension

The shortcuts are simple:

  • ` (usually the key above Tab) = Open/Retract Terminal
  • F12 (customize as you prefer) = Open/Retract Terminal


Trace code in Fedora with bpftrace

bpftrace is a new eBPF-based tracing tool that was first included in Fedora 28. It was developed by Brendan Gregg, Alastair Robertson and Matheus Marchini with the help of a loosely-knit team of hackers across the Net. A tracing tool lets you analyze what a system is doing behind the curtain. It tells you which functions in code are being called, with which arguments, how many times, and so on.

This article covers some basics about bpftrace, and how it works. Read on for more information and some useful examples.

eBPF (extended Berkeley Packet Filter)

eBPF is a tiny virtual machine, or a virtual CPU to be more precise, in the Linux kernel. It can load and run small programs in a safe and controlled way in kernel space. This makes it safer to use, even in production systems. This virtual machine has its own instruction set architecture (ISA) resembling a subset of modern processor architectures. The ISA makes it easy to translate those programs to the real hardware. The kernel performs just-in-time translation to native code on the main architectures to improve performance.

The eBPF virtual machine allows the kernel to be extended programmatically. Nowadays several kernel subsystems take advantage of this new powerful Linux Kernel capability. Examples include networking, seccomp, tracing, and more. The main idea is to attach eBPF programs into specific code points, and thereby extend the original kernel behavior.

eBPF machine language is very powerful. But writing code directly in it is extremely painful, because it’s a low-level language. This is where bpftrace comes in. It provides a high-level language to write eBPF tracing scripts. The tool then translates these scripts to eBPF with the help of the clang/LLVM libraries, and then attaches them to the specified code points.

Installation and quick start

To install bpftrace, run the following command in a terminal using sudo:

$ sudo dnf install bpftrace

Try it out with a “hello world” example:

$ sudo bpftrace -e 'BEGIN { printf("hello world\n"); }'

Note that you must run bpftrace as root due to the privileges required. Use the -e option to specify a program, and to construct the so-called “one-liners.” This example only prints hello world, and then waits for you to press Ctrl+C.

BEGIN is a special probe name that fires only once at the beginning of execution. Every action inside the curly braces { } fires whenever the probe is hit — in this case, it’s just a printf.

Let’s jump now to a more useful example:

$ sudo bpftrace -e 't:syscalls:sys_enter_execve { printf("%s called %s\n", comm, str(args->filename)); }'

This example prints the parent process name (comm) and the name of every new process being created in the system. t:syscalls:sys_enter_execve is a kernel tracepoint. It’s a shorthand for tracepoint:syscalls:sys_enter_execve, but both forms can be used. The next section shows you how to list all available tracepoints.

comm is a bpftrace builtin that represents the process name. filename is a field of the t:syscalls:sys_enter_execve tracepoint. You can access these fields through the args builtin.

All available fields of the tracepoint can be listed with this command:

$ sudo bpftrace -lv "t:syscalls:sys_enter_execve"
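
The output looks roughly like the following; the exact field list depends on your kernel version:

tracepoint:syscalls:sys_enter_execve
    int __syscall_nr;
    const char * filename;
    const char *const * argv;
    const char *const * envp;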

Example usage

Listing probes

A central concept in bpftrace is the probe point. Probe points are instrumentation points in code (kernel or userspace) where eBPF programs can be attached. They fall into the following categories:

  • kprobe – kernel function start
  • kretprobe – kernel function return
  • uprobe – user-level function start
  • uretprobe – user-level function return
  • tracepoint – kernel static tracepoints
  • usdt – user-level static tracepoints
  • profile – timed sampling
  • interval – timed output
  • software – kernel software events
  • hardware – processor-level events

All available kprobe/kretprobe, tracepoints, software and hardware probes can be listed with this command:

$ sudo bpftrace -l

The uprobe/uretprobe and usdt probes are userspace probes specific to a given executable. To use them, use the special syntax shown later in this article.

The profile and interval probes fire at fixed time intervals. Fixed time intervals are not covered in this article.

Counting system calls

Maps are special BPF data types that store counts, statistics, and histograms. You can use maps to summarize how many times each syscall is being called:

$ sudo bpftrace -e 't:syscalls:sys_enter_* { @[probe] = count(); }'

Some probe types allow wildcards to match multiple probes. You can also specify multiple attach points for an action block using a comma separated list. In this example, the action block attaches to all tracepoints whose name starts with t:syscalls:sys_enter_, which means all available syscalls.

The bpftrace builtin function count() counts the number of times this function is called. @[] represents a map (an associative array). The key of this map is probe, which is another bpftrace builtin that represents the full probe name.

Here, the same action block is attached to every syscall. Then, each time a syscall is called, the corresponding entry in the map is incremented. When the program terminates, it automatically prints out all declared maps.

This example counts the syscalls called globally. It’s also possible to filter for a specific process by PID using the bpftrace filter syntax:

$ sudo bpftrace -e 't:syscalls:sys_enter_* / pid == 1234 / { @[probe] = count(); }'

Write bytes by process

Using these concepts, let’s analyze how many bytes each process is writing:

$ sudo bpftrace -e 't:syscalls:sys_exit_write /args->ret > 0/ { @[comm] = sum(args->ret); }'

bpftrace attaches the action block to the write syscall return probe (t:syscalls:sys_exit_write). Then, it uses a filter to discard the negative values, which are error codes (/args->ret > 0/).

The map key comm represents the process name that called the syscall. The sum() builtin function accumulates the number of bytes written for each map entry or process. args is a bpftrace builtin for accessing the tracepoint’s arguments and return values. Finally, if successful, the write syscall returns the number of bytes written, and args->ret provides access to that value.

Read size distribution by process (histogram)

bpftrace supports the creation of histograms. Let’s analyze one example that creates a histogram of the read size distribution by process:

$ sudo bpftrace -e 't:syscalls:sys_exit_read { @[comm] = hist(args->ret); }'

Histograms are BPF maps, so they must always be attributed to a map (@). In this example, the map key is comm.

The example makes bpftrace generate one histogram for every process that calls the read syscall. To generate just one global histogram, attribute the hist() function just to ‘@’ (without any key).

bpftrace automatically prints out declared histograms when the program terminates. The value used as base for the histogram creation is the number of read bytes, found through args->ret.

Tracing userspace programs

You can also trace userspace programs with uprobes/uretprobes and USDT (User-level Statically Defined Tracing). The next example uses a uretprobe, which probes the end of a user-level function. It captures the command lines issued in every bash running on the system:

$ sudo bpftrace -e 'uretprobe:/bin/bash:readline { printf("readline: \"%s\"\n", str(retval)); }'

To list all available uprobes/uretprobes of the bash executable, run this command:

$ sudo bpftrace -l "uprobe:/bin/bash"

uprobe instruments the beginning of a user-level function’s execution, and uretprobe instruments the end (its return). readline() is a function of /bin/bash, and it returns the typed command line. retval is the return value for the instrumented function, and can only be accessed on uretprobe.

When using uprobes, you can access arguments with arg0..argN. A str() call is necessary to turn the char * pointer to a string.

Shipped Scripts

There are many useful scripts shipped with the bpftrace package. You can find them in the /usr/share/bpftrace/tools/ directory.

Among them, you can find:

  • killsnoop.bt – Trace signals issued by the kill() syscall.
  • tcpconnect.bt – Trace all TCP network connections.
  • pidpersec.bt – Count new processes (via fork) per second.
  • opensnoop.bt – Trace open() syscalls.
  • vfsstat.bt – Count some VFS calls, with per-second summaries.

You can directly use the scripts. For example:

$ sudo /usr/share/bpftrace/tools/killsnoop.bt

You can also study these scripts as you create new tools.


Photo by Roman Romashov on Unsplash.


4 cool new projects to try in COPR for August 2019

COPR is a collection of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.

Here’s a set of new and interesting projects in COPR.

Duc

Duc is a collection of tools for disk usage inspection and visualization. Duc uses an indexed database to store the sizes of files on your system. Once the indexing is done, you can quickly review your disk usage using either its command-line interface or the GUI.

Installation instructions

The repo currently provides duc for EPEL 7, Fedora 29 and 30. To install duc, use these commands:

sudo dnf copr enable terrywang/duc
sudo dnf install duc

MuseScore

MuseScore is software for working with music notation. With MuseScore, you can create sheet music using a mouse, a virtual keyboard or a MIDI controller. MuseScore can then play the created music or export it as a PDF, MIDI or MusicXML file. Additionally, there’s an extensive database of sheet music created by MuseScore users.

Installation instructions

The repo currently provides MuseScore for Fedora 29 and 30. To install MuseScore, use these commands:

sudo dnf copr enable jjames/MuseScore
sudo dnf install musescore

Dynamic Wallpaper Editor

Dynamic Wallpaper Editor is a tool for creating and editing a collection of wallpapers in GNOME that change over time. This can be done by hand using XML files; however, Dynamic Wallpaper Editor makes it easy with its graphical interface, where you can simply add pictures, arrange them, and set the duration of each picture and the transitions between them.

Installation instructions

The repo currently provides dynamic-wallpaper-editor for Fedora 30 and Rawhide. To install dynamic-wallpaper-editor, use these commands:

sudo dnf copr enable atim/dynamic-wallpaper-editor
sudo dnf install dynamic-wallpaper-editor

Manuskript

Manuskript is a tool for writers, aimed at making large writing projects easier to manage. It serves as an editor for writing the text itself, as well as a tool for organizing notes about the story, its characters and individual plots.

Installation instructions

The repo currently provides Manuskript for Fedora 29, 30 and Rawhide. To install Manuskript, use these commands:

sudo dnf copr enable notsag/manuskript
sudo dnf install manuskript

Use Postfix to get email from your Fedora system

Communication is key. Your computer might be trying to tell you something important. But if your mail transport agent (MTA) isn’t properly configured, you might not be getting the notifications. Postfix is an MTA that’s easy to configure and known for a strong security record. Follow these steps to ensure that email notifications sent from local services will get routed to your internet email account through the Postfix MTA.

Install packages

Use dnf to install the required packages (you configured sudo, right?):

$ sudo -i
# dnf install postfix mailx

If you previously had a different MTA configured, you may need to set Postfix to be the system default. Use the alternatives command to set your system default MTA:

$ sudo alternatives --config mta
There are 2 programs which provide 'mta'.

  Selection    Command
-----------------------------------------------
*+ 1           /usr/sbin/sendmail.sendmail
   2           /usr/sbin/sendmail.postfix
Enter to keep the current selection[+], or type selection number: 2

Create a password_maps file

You will need to create a Postfix lookup table entry containing the email address and password of the account that you want to use for sending email:

# MY_EMAIL_ADDRESS=glb@gmail.com
# MY_EMAIL_PASSWORD=abcdefghijklmnop
# MY_SMTP_SERVER=smtp.gmail.com
# MY_SMTP_SERVER_PORT=587
# echo "[$MY_SMTP_SERVER]:$MY_SMTP_SERVER_PORT $MY_EMAIL_ADDRESS:$MY_EMAIL_PASSWORD" >> /etc/postfix/password_maps
# chmod 600 /etc/postfix/password_maps
# unset MY_EMAIL_PASSWORD
# history -c

If you are using a Gmail account, you’ll need to configure an “app password” for Postfix, rather than using your regular Gmail password. See “Sign in using App Passwords” for instructions on configuring an app password.

Next, you must run the postmap command against the Postfix lookup table to create or update the hashed version of the file that Postfix actually uses:

# postmap /etc/postfix/password_maps

The hashed version will have the same file name but it will be suffixed with .db.
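
You can confirm that both files are now in place:

# ls /etc/postfix/password_maps*
/etc/postfix/password_maps  /etc/postfix/password_maps.db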

Update the main.cf file

Update Postfix’s main.cf configuration file to reference the Postfix lookup table you just created. Edit the file and add these lines.

relayhost = smtp.gmail.com:587
smtp_tls_security_level = verify
smtp_tls_mandatory_ciphers = high
smtp_tls_verify_cert_match = hostname
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/password_maps

The example assumes you’re using Gmail for the relayhost setting, but you can substitute the correct hostname and port for the mail host to which your system should hand off mail for sending.

For the most up-to-date details about the above configuration options, see the man page:

$ man postconf.5

Enable, start, and test Postfix

After you have updated the main.cf file, enable and start the Postfix service:

# systemctl enable --now postfix.service

You can then exit your sudo session as root using the exit command or Ctrl+D. You should now be able to test your configuration with the mail command:

$ echo 'It worked!' | mail -s "Test: $(date)" glb@gmail.com

Update services

If you have services like logwatch, mdadm, fail2ban, apcupsd or certwatch installed, you can now update their configurations so that their email notifications will go to your internet email address.

Optionally, you may want to configure all email that is sent to your local system’s root account to go to your internet email address. Add this line to the /etc/aliases file on your system (you’ll need to use sudo to edit this file, or switch to the root account first):

root: glb+root@gmail.com

Now run this command to re-read the aliases:

# newaliases

  • TIP: If you are using Gmail, you can add a plus sign and an alphanumeric tag between your username and the @ symbol, as demonstrated above, to make it easier to identify and filter the email that you will receive from your computer(s).

Troubleshooting

View the mail queue:

$ mailq

Clear all email from the queues:

# postsuper -d ALL

Filter the configuration settings for interesting values:

$ postconf | grep "^relayhost\|^smtp_"

View the postfix/smtp logs:

$ journalctl --no-pager -t postfix/smtp

Reload postfix after making configuration changes:

$ systemctl reload postfix

Photo by Sharon McCutcheon on Unsplash.