This article shows how easy it is to get started using pods with Podman on Fedora. But what is Podman? Podman is a container engine developed by Red Hat, and if you thought of Docker when you read "container engine," you are on the right track. Docker started a whole new revolution in containerization, and Kubernetes added the concept of pods to container orchestration for containers that share some common resources. But is it really worth sticking with Docker alone, assuming it's the only effective way of containerization? Podman can also manage pods on Fedora, as well as the containers used in those pods.
Podman is a daemonless, open source, Linux native tool designed to make it easy to find, run, build, share and deploy applications using Open Containers Initiative (OCI) Containers and Container Images.
From the official Podman documentation at http://docs.podman.io/en/latest/
Why should we switch to Podman?
Podman is a daemonless container engine for developing, managing, and running OCI Containers on your Linux System. Containers can either be run as root or in rootless mode. Podman directly interacts with an image registry, containers and image storage.
Install Podman:
sudo dnf -y install podman
Creating a Pod:
To start using a pod, we first need to create it, and for that we have a basic command structure:
$ podman pod create
The command above contains no arguments, so it will create a pod with a randomly generated name. You might, however, want to give your pod a relevant name. For that, you just need to modify the above command a bit.
$ podman pod create --name climoiselle
Podman will create the pod and report back the pod's ID. In the example shown, the pod was given the name 'climoiselle'. Viewing the newly created pod is easy using the command shown below:
$ podman pod list
Newly created pods have been deployed
As you can see, there are two pods listed here: one named darshna, and the one created in the example, named climoiselle. No doubt you noticed that both pods already include one container, yet we didn't deploy any containers to the pods yet.
What is that extra container inside the pod? This randomly generated container is an infra container. Every podman pod includes this infra container and in practice these containers do nothing but go to sleep. Their purpose is to hold the namespaces associated with the pod and to allow Podman to connect other containers to the pod. The other purpose of the infra container is to allow the pod to keep running when all associated containers have been stopped.
You can also view the individual containers within a pod with the command:
$ podman ps -a --pod
Add a container
The cool thing is, you can add more containers to your newly deployed pod. Remember the name of your pod; it's important, as you'll need it in order to deploy a container into that pod. We'll use the official Ubuntu image and deploy a container from it running the top command.
$ podman run -dt --pod climoiselle ubuntu top
Everything in a Single Command:
Podman is agile when it comes to deploying a container in a pod: you can create a pod and deploy a container to it with a single command. Let's say you want to deploy an NGINX container, exposing external port 8080 to internal port 80, in a new pod named test_server.
$ podman run -dt --pod new:test_server -p 8080:80 nginx
Created a new pod and deployed a container together
Let’s check all pods that have been created and the number of containers running in each of them …
$ podman pod list
List of the pods, their state, and the number of containers running in them
Do you want to know a detailed configuration of the pods which are running? Just type in the command shown below:
podman pod inspect [pod's name/id]
Make it stop!
To stop a pod, we need its name or ID. With the information from podman's pod list command, we can view the pods and their infra IDs. Simply use podman with the stop command and the particular name or infra ID of the pod.
$ podman pod stop climoiselle
Hey take a look!
My pod climoiselle stopped
After following this short tutorial, you can see how quickly you can use pods with Podman on Fedora. It's an easy and convenient way to use containers that share resources and interact together.
We recently interviewed Ben Cotton on how he uses Fedora. This is part of a series on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the feedback form to express your interest in becoming an interviewee.
Who is Ben Cotton?
If you follow the Fedora Community Blog, there's a good chance you already know who Ben is.
Ben's Linux journey started around late 2002. He was frustrated with issues using Windows XP, and he was starting a new application administrator role at his university, where some services ran on FreeBSD. A friend introduced him to Red Hat Linux when Ben decided it made sense to get more practice with Unix-like operating systems. He switched to Fedora full-time in 2006, after he landed a job as a Linux system administrator.
Since then, his career has included system administration, people management, support engineering, development, and marketing. Several years ago, he even earned a Master’s degree in IT Project Management. The variety of experience has helped Ben learn how to work with different groups of people. “A lot of what I’ve learned has come from making mistakes. When you mess up communication, you hopefully do a better job the next time.”
Besides tech, Ben also has a range of well-rounded interests. “I used to do a lot of short fiction writing, but these days I mostly write my opinions about whatever is on my mind.” As for favorite foods, he claims “All of it. Feed me.”
Additionally, Ben has taste that spans genres. His childhood hero was a character from the science fiction series “Star Trek: The Next Generation”. “As a young lad, I wanted very much to be Wesley Crusher.” His favorite movies are a parody film and a spy thriller: “‘Airplane!’ and ‘The Hunt for Red October’” respectively.
When asked for the five greatest qualities he thinks someone can possess, Ben responded cleverly: “Kindness. Empathy. Curiosity. Resilience. Red hair.”
Ben wearing the official “#action bcotton” shirt
His Fedora Story
As a talented writer who described himself as “not much of a programmer”, he selected the Fedora Docs team in 2009 as an entry point into the community. What he found was that “the Friends foundation was evident.” At the time, he wasn’t familiar with tools such as Git, DocBook XML, or Publican (docs toolchain at the time). The community of experienced doc writers helped him get on his feet and freely gave their time. To this day, Ben considers many of them to be his friends and feels really lucky to work with them. Notably “jjmcd, stickster, sparks, and jsmith were a big part of the warm welcome I received.”
Today, as a senior program manager, he describes his job as "Chief Cat Herding Officer," since his work is largely composed of seeing what different parts of the project are doing and making sure they're all heading in the same general direction.
Despite having a huge responsibility, Ben also helps a lot in his free time with tasks like website work, CommBlog and Magazine editing, and packaging, none of which are his core job responsibilities. He tries to find ways to contribute that match his skills and interests. Building credibility, paying attention, developing relationships with other contributors, and showing folks that he's able to help matter much more to him than what his "official" title is.
When thinking towards the future, Ben feels hopeful watching the Change proposals come in. "Sometimes they get rejected, but that's to be expected when you're trying to advance the state of the art. Fedora contributors are working hard to push the project forward."
The Fedora Community
As a longtime member of the community, Ben has various notions about the Fedora Project that have been developed over the years. For starters, he wants to make it easier to bring new contributors on board. He believes the Join SIG has “done tremendous work in this area”, but new contributors will keep the community vibrant.
If Ben had to pick a best moment, he’d choose Flock 2018 in Dresden. “That was my first Fedora event and it was great to meet so many of the people who I’ve only known online for a decade.”
As for bad moments, Ben hasn't had many. He once messed up a Bugzilla query, accidentally closing hundreds of bugs, and he has dealt with some frustrating mailing list threads, but he remains positive, affirming that "frustration is okay."
To those interested in becoming involved in the Fedora Project, Ben says “Come join us!” There’s something to appeal to almost anyone. “Take the time to develop relationships with the people you meet as you join, because without the Friends foundation, the rest falls apart.”
Pet Peeves
One issue he finds challenging is a lack of documentation. “I’ve learned enough over the years that I can sort of bumble through making code changes to things, but a lot of times it’s not clear how the code ties together.” Ben sees how sparse or nonexistent documentation can be frustrating to newcomers who might not have the knowledge that is assumed.
Another concern Ben has is that the “interesting” parts of technology are changing. “Operating systems aren’t as important to end users as they used to be thanks to the rise of mobile computing and Software-as-a-Service. Will this cause our pool of potential new contributors to decrease?”
Likewise, Ben believes that it’s not always easy to get people to understand why they should care about open source software. “The reasons are often abstract and people don’t see that they’re directly impacted, especially when the alternatives provide more immediate convenience.”
What Hardware?
For work, Ben has a ThinkPad X1 Carbon running Fedora 33 KDE. His personal server/desktop is a machine he put together from parts that runs Fedora 33 KDE. He uses it as a file server, print server, Plex media server, and general-purpose desktop. If he has some spare time to get it started, Ben also has an extra laptop that he wants to start using to test Beta releases and “maybe end up running rawhide on it”.
What Software?
Ben has been a KDE user for a decade. A lot of his work is done in a web browser (Firefox for work stuff, Chrome for personal). He does most of his scripting in Python these days, with some inherited scripts in Perl.
Notable applications that Ben uses include:
Cherrytree for note-taking
Element for IRC
Inkscape for graphics and Kdenlive when he needs to edit videos
Vim on the command line and Kate when he wants a GUI
Sometimes there is a need to add another disk to your system. This is where Logical Volume Management (LVM) comes in handy. The cool thing about LVM is that it’s fairly flexible. There are several ways to add a disk. This article describes one way to do it.
Heads up!
This article does not cover the process of physically installing a new disk drive into your system. Consult your system and disk documentation on how to do that properly.
Important: Always make sure you have backups of important data. The steps described in this article will destroy data if it already exists on the new disk.
Good to know
This article doesn't cover every LVM feature deeply; the focus is on adding a disk. But basically, LVM has volume groups, made up of one or more partitions and/or disks. You add the partitions or disks as physical volumes. A volume group can be broken down into many logical volumes. Logical volumes can be used as any other storage would be, for filesystems, ramdisks, etc. More information can be found in the lvm(8) manual page.
Think of the physical volumes as forming a pool of storage (a volume group) from which you then carve out logical volumes for your system to use directly.
Preparation
Make sure you can see the disk you want to add. Use lsblk prior to adding the disk to see what storage is already available or in use.
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
zram0 251:0 0 989M 0 disk [SWAP]
vda 252:0 0 20G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 19G 0 part
└─fedora_fedora-root 253:0 0 19G 0 lvm /
This article uses a virtual machine with virtual storage. Therefore the device names start with vda for the first disk, vdb for the second, and so on. The name of your device may be different. Many systems will see physical disks as sda for the first disk, sdb for the second, and so on.
Once the new disk has been connected and your system is back up and running, use lsblk again to see the new block device.
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
zram0 251:0 0 989M 0 disk [SWAP]
vda 252:0 0 20G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 19G 0 part
└─fedora_fedora-root 253:0 0 19G 0 lvm /
vdb 252:16 0 10G 0 disk
There is now a new device named vdb. The location for the device is /dev/vdb.
$ ls -l /dev/vdb
brw-rw----. 1 root disk 252, 16 Nov 24 12:56 /dev/vdb
We can see the disk, but we cannot use it with LVM yet. If you run blkid you should not see it listed. For this and the following commands, ensure your system is configured so you can use sudo.
Initialize the disk using pvcreate and add it to the volume group. You need to pass the full path to the device. In this example it is /dev/vdb; on your system it may be /dev/sdb or another device name.
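For example, initialize the disk and then extend the volume group with it (a sketch using this article's device and volume group names; substitute your own):

$ sudo pvcreate /dev/vdb
$ sudo vgextend fedora_fedora /dev/vdb

The vgextend step is what grows the volume group, and it is why vgdisplay below reports two physical volumes (Cur PV 2).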
You can get a detailed list of the specific volume group and physical volumes as well:
$ sudo vgdisplay fedora_fedora
--- Volume group ---
VG Name fedora_fedora
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size 28.99 GiB
PE Size 4.00 MiB
Total PE 7422
Alloc PE / Size 4863 / 19.00 GiB
Free PE / Size 2559 / 10.00 GiB
VG UUID C5dL2s-dirA-SQ15-TfQU-T3yt-l83E-oI6pkp
Look at the PV:
$ sudo pvdisplay /dev/vdb
--- Physical volume ---
PV Name /dev/vdb
VG Name fedora_fedora
PV Size 10.00 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 2559
Free PE 2559
Allocated PE 0
PV UUID 4uUUuI-lMQY-WyS5-lo0W-lqjW-Qvqw-RqeroE
Now that we have added the disk, we can allocate its space to logical volumes (LVs).
Look at the logical volumes. Here’s a detailed look at the root LV:
$ sudo lvdisplay fedora_fedora/root
--- Logical volume ---
LV Path /dev/fedora_fedora/root
LV Name root
VG Name fedora_fedora
LV UUID yqc9cw-AvOw-G1Ni-bCT3-3HAa-qnw3-qUSHGM
LV Write Access read/write
LV Creation host, time fedora, 2020-11-24 11:44:36 -0500
LV Status available
LV Size 19.00 GiB
Current LE 4863
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
Look at the size of the root filesystem and compare it to the logical volume size.
$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fedora_fedora-root 19G 1.4G 17G 8% /
The logical volume and the filesystem both agree the size is 19G. Let’s add 5G to the root logical volume:
$ sudo lvresize -L +5G fedora_fedora/root
Size of logical volume fedora_fedora/root changed from 19.00 GiB (4863 extents) to 24.00 GiB (6143 extents).
Logical volume fedora_fedora/root successfully resized.
We now have 24G available to the logical volume. Look at the / filesystem.
$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fedora_fedora-root 19G 1.4G 17G 8% /
We still see only 19G. This is because the logical volume is not the same as the filesystem. To use the new space added to the logical volume, resize the filesystem.
$ sudo resize2fs /dev/fedora_fedora/root
resize2fs 1.45.6 (20-Mar-2020)
Filesystem at /dev/fedora_fedora/root is mounted on /; on-line resizing required
old_desc_blocks = 3, new_desc_blocks = 3
The filesystem on /dev/fedora_fedora/root is now 6290432 (4k) blocks long.
Look at the size of the filesystem.
$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fedora_fedora-root 24G 1.4G 21G 7% /
As you can see, the root file system (/) has taken all of the space available on the logical volume and no reboot was needed.
You have now initialized a disk as a physical volume, and extended the volume group with the new physical volume. After that you increased the size of the logical volume, and resized the filesystem to use the new space from the logical volume.
There are, like most things in the Unix/Linux world, many ways of doing things with Vagrant, but here are some examples to grow your Vagrantfile portfolio and increase your knowledge of the tool.
If you have not yet installed Vagrant, you can follow the first part of this series.
Also in this section you can configure provider-specific options. In this case the provider is libvirt, and the specific config looks like this:
config.vm.provider :libvirt do |libvirt|
  libvirt.cpus = 1
  libvirt.memory = 512
end
In the example above, all libvirt VMs will be created with a single CPU and 512MB of memory unless specifically overridden.
The VM namespace is where you define all machines you want this Vagrantfile to build. Notice that this is still a part of the config section, and lines should therefore begin with ‘config’. All sections or parts of sections have an ‘end’ statement to close them off.
Creating multiple machines at once
Depending on what you need to achieve, this can be a simple loop or multiple machine definitions. To create any number of machines in a series, with the same settings but perhaps different names and/or IP addresses, you can just provide a range as shown here:
(1..5).each do |i|
config.vm.define "server#{i}" do |server|
server.vm.hostname = "server#{i}.example.com"
end
end
This will create 5 servers, named server1 through server5.
Of note, the Ruby-style "for i in 1..3 do" loop doesn't work here, despite Vagrantfile syntax actually being Ruby, so use the each method from the example above.
If you need servers with different hostnames, different hardware, etc., then you'll need to specify them individually, or at least in groups if the situation lends itself to that. Let's say you need to create a typical web/db/load balancer infrastructure, with 2 web servers, a single database server and a load balancer for the web traffic. Ignoring the specific software setup for this, to simply create the virtual machines ready for provisioning you could use something like this:
# Load Balancer
config.vm.define "loadbal", primary: true do |loadbal|
loadbal.vm.hostname = "loadbal"
end
# Database
config.vm.define "db", primary: true do |db|
db.vm.hostname = "db"
end
# Web Servers x2
(1..2).each do |i|
config.vm.define "web#{i}" do |web|
web.vm.hostname = "web#{i}"
end
end
This uses a combination of multiple machine calls and a small loop to build 4 VMs with a single ‘vagrant up’ command.
Networking
Vagrant generally creates its own network for VM access, and you use this with 'vagrant ssh'. If you create more than one VM then you must use the VM name to identify which one you wish to connect to, for example vagrant ssh vmname.
There are a number of configuration options available which allow you to interact with your VMs in various ways.
The vagrant-libvirt plugin creates a network for the guests to use. This is automated and will always be present even if you define your own networks. The network is named “vagrant-libvirt” and can be seen either in the Virtual Networks tab of virt-manager’s connection details or by issuing a sudo virsh net-list command.
If you use dhcp for your guests, you can find the individual IP addresses with the virsh net-dhcp-leases command: sudo virsh net-dhcp-leases vagrant-libvirt
Port Forwarding
The simplest change to default networking is port forwarding. This uses a simple format like most Vagrant config: config.vm.network "forwarded_port", guest: 80, host: 8080
This listens to port 8080 on your local machine and forwards connections to port 80 on the Vagrant machine. If you need to use a UDP port, simply add , protocol: "udp" to the end of that line (notice the comma, which should come immediately after the second port number).
Obviously for more complex configurations this might not be ideal, as you need to specify every single port you want to forward. If you then add multiple machines the complexity can really become too much.
In addition to this, anyone on your network can access these ports if they know your IP address, so that’s something you should be aware of.
Public Network
This creates a network card for the Vagrant VM which connects to your host network, and will therefore be visible to all machines on that network. As Vagrant is not designed to be secure, you should be aware of any vulnerabilities and take steps to protect against them.
To configure a public network, add config.vm.network "public_network" to your Vagrantfile. This will use DHCP to obtain a network address.
If you wish to assign a static IP address, you can add one to the end of the network declaration: config.vm.network "public_network", ip: "192.168.0.1"
If you’re creating multiple guests you can put the network configuration in the vm namespace, and even allocate IPs based on iteration too:
Vagrant.configure("2") do |config|
config.vm.box = "centos/8"
config.vm.provider :libvirt do |libvirt|
libvirt.qemu_use_session = false
end
# Servers x2
(1..2).each do |i|
config.vm.define "server#{i}" do |server|
server.vm.hostname = "server#{i}"
server.vm.network "public_network", ip: "192.168.122.20#{i}"
end
end
end
Private Network
This works very much like the Public Network option, except that the network is only available to the host machine and the Vagrant guests. The syntax is almost identical too: config.vm.network "private_network", type: "dhcp"
This will create a new network in libvirt, usually named something like “vagrant-private-dhcp” – you can see this with the command sudo virsh net-list while the VM is running. This network is created and destroyed along with the vagrant guests.
Again, the network config can be specified for all guests, or per guest as shown in the public network example above.
Provisioning
Once you have your VMs defined, you can obviously then do whatever you want with them, but as soon as you issue a ‘vagrant destroy’ command any changes will be lost. This is where automated provisioning comes in.
You can use several methods to provision your machines, from simple file copies to shell scripts, Ansible, Chef and Puppet. Many of the main methods can be used, but I’ll cover the simple ones here – if you need to use something else please read the documentation as it’s all covered.
File uploads
To copy a file to the Vagrant guest, add a line to the Vagrantfile like this:
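A minimal sketch (the folder and newfolder names are illustrative):

config.vm.provision "file", source: "folder", destination: "$HOME/remote/newfolder"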
The directory structure should already exist on the Vagrant host, and will be copied in its entirety, including subdirectories and files.
Note: If you add a trailing slash to the destination path, the source path will be placed under this so make sure you only do this if you want that outcome. For example, if the above destination was “$HOME/remote/newfolder/”, then the result would see “$HOME/remote/newfolder/folder” created with the contents of the source placed here.
Shell commands
You can include individual commands, inline scripts or external scripts to perform provisioning tasks.
A single command would take this form, and any valid command line command can be used here: config.vm.provision "shell", inline: "sudo dnf update -y"
An inline script is less common, and declared at the top of the Vagrantfile then called during provisioning:
$script = <<-SCRIPT
echo I am provisioning...
date > /etc/vagrant_provisioned_at
SCRIPT
Vagrant.configure("2") do |config|
config.vm.provision "shell", inline: $script
end
More common is the external shell script, which gives more flexibility and makes code more modular. Vagrant uploads the file to the guest then executes it. Simply call the script in the provisioning line:
config.vm.provision "shell", path: "script.sh"
The file need not be local to the Vagrant host either:
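Vagrant also accepts a URL as the script path; this sketch uses a placeholder URL:

config.vm.provision "shell", path: "https://example.com/provisioner.sh"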
You specify an Ansible playbook to provision your VM in the following way:
config.vm.provision "ansible" do |ansible|
ansible.playbook = "playbook.yml"
end
This then calls the playbook, which will run as any externally-run ansible playbook would.
If you’re building multiple VMs with your Vagrantfile then it’s likely you want different configurations for some of them, and in this case you should provision within the definition of each VM, as shown here:
# Web Servers x2
(1..2).each do |i|
config.vm.define "web#{i}" do |web|
web.vm.hostname = "web#{i}"
web.vm.provision "ansible" do |ansible|
ansible.playbook = "web.yml"
end
end
end
Ansible provisioners come in two formats: ansible and ansible_local. The ansible provisioner requires that Ansible is installed on the Vagrant host, and connects remotely to your guest VMs to provision them. This means all necessary ssh authentication must be in place for it to work. The ansible_local provisioner executes playbooks directly on the guest VMs, which therefore requires Ansible to be installed on each of the guests you want to provision. Vagrant will try to install Ansible on the guests in order to do this (this can be controlled with the install option, and is enabled by default). On RHEL-style systems, Ansible is installed from the EPEL repository. Simply use either ansible or ansible_local in the config.vm.provision command to choose the style you need.
Synced Folders
Vagrant allows you to sync folders between your Vagrant host and your guests, allowing access to configuration files, data etc. By default, the folder containing the Vagrant file is shared and mounted under /vagrant on each guest.
To configure additional synced folders, use the config.vm.synced_folder command:
config.vm.synced_folder "src/", "/srv/website"
The two parameters are the source folder on the Vagrant host and the mount directory on the guest. The destination folder will be created if it does not exist, recursively if necessary.
Options for synced folders allow you to configure them better, including the option to disable them completely. Other options allow you to specify a group owner of the folder (group), the folder owner (owner), plus mount options. There are others but these are the main ones.
You can disable the default share with the following command:
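A sketch using the documented disabled option:

config.vm.synced_folder ".", "/vagrant", disabled: true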
When using Vagrant on a Linux host, synced folders use NFS (with the exception of the default share, which uses rsync; see below), so you must have NFS installed on the Vagrant host, and the guests also need NFS support installed. To use NFS, simply specify the folder type as 'nfs':
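For example, reusing the illustrative paths from the earlier synced-folder example:

config.vm.synced_folder "src/", "/srv/website", type: "nfs"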
Rsync shared folders are the easiest to use, as they usually work without any intervention on a Linux host. This is a one-way sync from host to guest, performed at startup (vagrant up) or after a vagrant reload command is issued. The default share of the Vagrant project directory is done with rsync. To configure a synced folder with rsync, specify the type as 'rsync':
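Again with the same illustrative paths:

config.vm.synced_folder "src/", "/srv/website", type: "rsync"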
When adding storage to a Linux server, system administrators often use commands like pvcreate, vgcreate, lvcreate, and mkfs to integrate the new storage into the system. Stratis is a command-line tool designed to make managing storage much simpler. It creates, modifies, and destroys pools of storage. It also allocates and deallocates filesystems from the storage pools.
Instead of an entirely in-kernel approach like ZFS or Btrfs, Stratis uses a hybrid approach with components in both user space and kernel land. It builds on existing block device managers like device mapper and existing filesystems like XFS. Monitoring and control is performed by a user space daemon.
Stratis tries to avoid some ZFS characteristics like restrictions on adding new hard drives or replacing existing drives with bigger ones. One of its main design goals is to achieve a positive command-line experience.
Install Stratis
Begin by installing the required packages. Several Python-related dependencies will be automatically pulled in. The stratisd package provides the stratisd daemon which creates, manages, and monitors local storage pools. The stratis-cli package provides the stratis command along with several Python libraries.
# yum install -y stratisd stratis-cli
Next, enable the stratisd service.
# systemctl enable --now stratisd
Note that the "enable --now" syntax shown above both permanently enables and immediately starts the service.
After determining what disks/block devices are present and available, the three basic steps to using Stratis are:
Create a pool of the desired disks.
Create a filesystem in the pool.
Mount the filesystem.
In the following example, four virtual disks are available in a virtual machine. Be sure not to use the root/system disk (/dev/vda in this example)!
# stratis pool create testpool /dev/vdb /dev/vdc
# stratis pool list
Name Total Physical Size Total Physical Used
testpool 10 GiB 56 MiB
After creating the pool, check the status of its block devices:
# stratis blockdev list
Pool Name Device Node Physical Size State Tier
testpool /dev/vdb 5 GiB In-use Data
testpool /dev/vdc 5 GiB In-use Data
Create a filesystem using Stratis
Next, create a filesystem. As mentioned earlier, Stratis uses the existing DM (device mapper) and XFS filesystem technologies to create thinly-provisioned filesystems. By building on these existing technologies, large filesystems can be created and it is possible to add physical storage as storage needs grow.
# stratis fs create testpool testfs
# stratis fs list
Pool Name Name Used Created Device UUID
testpool testfs 546 MiB Apr 18 2020 09:15 /stratis/testpool/testfs 095fb4891a5743d0a589217071ff71dc
Note that “fs” in the example above can optionally be written out as “filesystem”.
Mount the filesystem
Next, create a mount point and mount the filesystem.
# mkdir /testdir
# mount /stratis/testpool/testfs /testdir
# df -h | egrep 'stratis|Filesystem'
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/stratis-1-3e8e[truncated]71dc 1.0T 7.2G 1017G 1% /testdir
The actual space used by a filesystem is shown by the stratis fs list command demonstrated previously. Notice how the testfs filesystem has a virtual size of 1.0T. If the data in a filesystem approaches its virtual size, and there is available space in the storage pool, Stratis will automatically grow the filesystem. Note that beginning with Fedora 34, the form of the device path will be /dev/stratis/<pool-name>/<filesystem-name>.
Add the filesystem to fstab
To configure automatic mounting of the filesystem at boot time, run the following commands:
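One way to do this (a sketch: it looks up the filesystem UUID with lsblk and appends a matching entry to /etc/fstab; Stratis filesystems are XFS):

# UUID=$(lsblk -n -o uuid /stratis/testpool/testfs)
# echo "UUID=${UUID} /testdir xfs defaults 0 0" >> /etc/fstab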
After updating fstab, verify that the entry is correct by unmounting and mounting the filesystem:
# umount /testdir
# mount /testdir
# df -h | egrep 'stratis|Filesystem'
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/stratis-1-3e8e[truncated]71dc 1.0T 7.2G 1017G 1% /testdir
Adding cache devices with Stratis
Suppose /dev/vdd is an available SSD (solid state disk). To configure it as a cache device and check its status, use the following commands:
# stratis pool add-cache testpool /dev/vdd
# stratis blockdev
Pool Name Device Node Physical Size State Tier
testpool /dev/vdb 5 GiB In-use Data
testpool /dev/vdc 5 GiB In-use Data
testpool /dev/vdd 5 GiB In-use Cache
Growing the storage pool
Suppose the testfs filesystem is close to using all the storage capacity of testpool. You could add an additional disk/block device to the pool with commands similar to the following:
# stratis pool add-data testpool /dev/vde
# stratis blockdev
Pool Name Device Node Physical Size State Tier
testpool /dev/vdb 5 GiB In-use Data
testpool /dev/vdc 5 GiB In-use Data
testpool /dev/vdd 5 GiB In-use Cache
testpool /dev/vde 5 GiB In-use Data
After adding the device, verify that the pool shows the added capacity:
# stratis pool
Name Total Physical Size Total Physical Used
testpool 15 GiB 606 MiB
Conclusion
Stratis is a tool designed to make managing storage much simpler. Creating a filesystem with enterprise functionalities like thin-provisioning, snapshots, volume management, and caching can be accomplished quickly and easily with just a few basic commands.
This has been called the age of DevOps, and operating systems seem to be getting a little bit less attention than tools are. However, this doesn’t mean that there has been no innovation in operating systems. [Edit: The diversity of offerings from the plethora of distributions based on the Linux kernel is a fine example of this.] Fedora CoreOS has a specific philosophy of what an operating system should be in this age of DevOps.
Fedora CoreOS’ philosophy
Fedora CoreOS (FCOS) came from the merging of CoreOS Container Linux and Fedora Atomic Host. It is a minimal, monolithic OS focused on running containerized applications. Security is a first-class citizen: FCOS provides automatic updates and comes with SELinux hardening.
For automatic updates to work well, they need to be very robust; the goal is that servers running FCOS won't break after an update. This is achieved by using different release streams (stable, testing and next). Each stream is released every 2 weeks, and content is promoted from one stream to the next (next -> testing -> stable). That way, updates landing in the stable stream have had the opportunity to be tested over a long period of time.
Getting Started
For this example let’s use the stable stream and a QEMU base image that we can run as a virtual machine. You can use coreos-installer to download that image.
From your (Workstation) terminal, run the following commands after updating the link to the image. [Edit: On Silverblue the container based coreos tools are the simplest method to try. Instructions can be found at https://docs.fedoraproject.org/en-US/fedora-coreos/tutorial-setup/ , in particular “Setup with Podman or Docker”.]
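For example, coreos-installer can fetch and decompress the latest stable QEMU image (a sketch; verify the flags against coreos-installer download --help for your version):

$ coreos-installer download --stream stable --platform qemu --format qcow2.xz --decompress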
To customize a FCOS system, you need to provide a configuration file that will be used by Ignition to provision the system. You may use this file to configure things like creating a user, adding a trusted SSH key, enabling systemd services, and more.
The following configuration creates a 'core' user and adds an SSH key to the authorized_keys file. It also creates a systemd service that uses podman to run a simple hello world container.
version: "1.0.0"
variant: fcos
passwd: users: - name: core ssh_authorized_keys: - ssh-ed25519 my_public_ssh_key_hash fcos_key
systemd: units: - contents: | [Unit] Description=Run a hello world web service After=network-online.target Wants=network-online.target [Service] ExecStart=/bin/podman run --pull=always --name=hello --net=host -p 8080:8080 quay.io/cverna/hello ExecStop=/bin/podman rm -f hello [Install] WantedBy=multi-user.target enabled: true name: hello.service
After adding your SSH key to the configuration, save it as config.yaml. Next, use the Fedora CoreOS Config Transpiler (fcct) tool to convert this YAML configuration into a valid Ignition configuration (JSON format).
Install fcct directly from Fedora’s repositories or get the binary from GitHub.
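With fcct installed, convert the configuration and boot the image (a sketch; the image filename, VM sizing, and virt-install options are assumptions, and the chcon line relabels config.ign so the libvirt-managed QEMU process may read it):

$ fcct --pretty --strict config.yaml --output config.ign
$ chcon --verbose unconfined_u:object_r:svirt_home_t:s0 config.ign
$ virt-install --name=fcos --vcpus=2 --ram=2048 --os-variant=fedora32 \
    --import --network=bridge=virbr0 --graphics=none \
    --qemu-commandline="-fw_cfg name=opt/com.coreos/config,file=${PWD}/config.ign" \
    --disk=size=20,backing_store=${PWD}/fedora-coreos.qcow2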
Once the virtual machine boots, some information is displayed and a login prompt is provided.
Fedora CoreOS 32.20200907.3.0
Kernel 5.8.10-200.fc32.x86_64 on an x86_64 (ttyS0)
SSH host key: SHA256:BJYN7AQZrwKZ7ZF8fWSI9YRhI++KMyeJeDVOE6rQ27U (ED25519)
SSH host key: SHA256:W3wfZp7EGkLuM3z4cy1ZJSMFLntYyW1kqAqKkxyuZrE (ECDSA)
SSH host key: SHA256:gb7/4Qo5aYhEjgoDZbrm8t1D0msgGYsQ0xhW5BAuZz0 (RSA)
ens2: 192.168.122.237 fe80::5054:ff:fef7:1a73
Ignition: user provided config was applied
Ignition: wrote ssh authorized keys file for user: core
The Ignition configuration file did not provide any password for the core user, therefore it is not possible to login directly via the console. (Though, it is possible to configure a password for users via Ignition configuration.)
Use Ctrl + ] key combination to exit the virtual machine’s console. Then check if the hello.service is running.
$ curl http://192.168.122.237:8080
Hello from Fedora CoreOS!
Using the preconfigured SSH key, you can also access the VM and inspect the services running on it.
$ ssh core@192.168.122.237
$ systemctl status hello
● hello.service - Run a hello world web service
Loaded: loaded (/etc/systemd/system/hello.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2020-10-28 10:10:26 UTC; 42s ago
zincati, rpm-ostree and automatic updates
The zincati service drives rpm-ostreed with automatic updates. Check which version of Fedora CoreOS is currently running on the VM, and check if Zincati has found an update.
$ ssh core@192.168.122.237
$ rpm-ostree status
State: idle
Deployments:
● ostree://fedora:fedora/x86_64/coreos/stable
Version: 32.20200907.3.0 (2020-09-23T08:16:31Z)
Commit: b53de8b03134c5e6b683b5ea471888e9e1b193781794f01b9ed5865b57f35d57
GPGSignature: Valid signature by 97A1AE57C3A2372CCA3A4ABA6C13026D12C944D0
$ systemctl status zincati
● zincati.service - Zincati Update Agent
Loaded: loaded (/usr/lib/systemd/system/zincati.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2020-10-28 13:36:23 UTC; 7s ago
…
Oct 28 13:36:24 cosa-devsh zincati[1013]: [INFO ] initialization complete, auto-updates logic enabled
Oct 28 13:36:25 cosa-devsh zincati[1013]: [INFO ] target release '32.20201004.3.0' selected, proceeding to stage it
Zincati then stages the new release and reboots the machine to apply it.
After the restart, let’s remote login once more to check the new version of Fedora CoreOS.
rpm-ostree status now shows 2 versions of Fedora CoreOS, the one that came in the QEMU image, and the latest one received from the update. By having these 2 versions available, it is possible to rollback to the previous version using the rpm-ostree rollback command.
Finally, you can make sure that the hello service is still running and serving content.
$ curl http://192.168.122.237:8080
Hello from Fedora CoreOS!
Fedora CoreOS provides a solid and secure operating system tailored to run applications in containers. It excels in a DevOps environment which encourages hosts to be provisioned using declarative configuration files. Automatic updates and the ability to roll back to a previous version of the OS bring peace of mind during the operation of a service.
Learn more about Fedora CoreOS by following the tutorials available in the project’s documentation.
Containerization is a booming technology. As many as seventy-five percent of global organizations could be running some type of containerization technology in the near future. Since widely used technologies are more likely to be targeted by hackers, securing containers is especially important. This article will demonstrate how POSIX capabilities are used to secure Podman containers. Podman is the default container management tool in RHEL 8.
Determine the Podman container’s privilege mode
Containers run in either privileged or unprivileged mode. In privileged mode, the container uid 0 is mapped to the host’s uid 0. For some use cases, unprivileged containers lack sufficient access to the resources of the host machine. Technologies and techniques including Mandatory Access Control (apparmor, SELinux), seccomp filters, dropping of capabilities, and namespaces help to secure containers regardless of their mode of operation.
To determine the privilege mode from outside the container:
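One way to check is with podman inspect (the container ID is a placeholder):

$ podman inspect --format '{{.HostConfig.Privileged}}' <container id>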
If the above command returns true then the container is running in privileged mode. If it returns false then the container is running in unprivileged mode.
To determine the privilege mode from inside the container:
$ ip link add dummy0 type dummy
If this command allows you to create an interface then you are running a privileged container. Otherwise you are running an unprivileged container.
Capabilities
Namespaces isolate a container’s processes from arbitrary access to the resources of its host and from access to the resources of other containers running on the same host. Processes within privileged containers, however, might still be able to do things like alter the IP routing table, trace arbitrary processes, and load kernel modules. Capabilities allow one to apply finer-grained restrictions on what resources the processes within a container can access or alter; even when the container is running in privileged mode. Capabilities also allow one to assign privileges to an unprivileged container that it would not otherwise have.
For example, to add the NET_ADMIN capability to an unprivileged container so that a network interface can be created inside of the container, you would run podman with parameters similar to the following:
[root@vm1 ~]# podman run -it --cap-add=NET_ADMIN centos
[root@b27fea33ccf1 /]# ip link add dummy0 type dummy
[root@b27fea33ccf1 /]# ip link
The above commands demonstrate a dummy0 interface being created in an unprivileged container, showing how to grant a capability to such a container. Without the NET_ADMIN capability, an unprivileged container would not be able to create an interface.
Currently, there are about 39 capabilities that can be granted or denied. Privileged containers are granted many capabilities by default. It is advisable to drop unneeded capabilities from privileged containers to make them more secure.
To drop all capabilities from a container:
$ podman run -it -d --name mycontainer --cap-drop=all centos
To list a container’s capabilities:
$ podman exec -it 48f11d9fa512 capsh --print
The above command should show that no capabilities are granted to the container.
Refer to the capabilities man page for a complete list of capabilities:
$ man capabilities
Use the capsh command to list the capabilities you currently possess:
$ capsh --print
As another example, the below command demonstrates dropping the NET_RAW capability from a container. Without the NET_RAW capability, servers on the internet cannot be pinged from within the container.
$ podman run -it --name mycontainer1 --cap-drop=net_raw centos
$ ping google.com
ping: socket: Operation not permitted
As a final example, if your container were to only need the SETUID and SETGID capabilities, you could achieve such a permission set by dropping all capabilities and then re-adding only those two.
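A sketch of that combination (the container name is illustrative, reusing the centos image from the earlier examples):

$ podman run -it -d --name mycontainer2 --cap-drop=all --cap-add=setuid --cap-add=setgid centos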
If you're like me, you may find yourself running Windows for a variety of reasons, from work to gaming. Sure, you could run Fedora in a virtual machine or as a container, but those don't blend into a common Windows experience as easily as the Windows Subsystem for Linux (WSL). Using Fedora via WSL will let you blend the two environments together for a fantastic development environment.
Prerequisites
There are a few basics you'll need in order to make this all work. You should be running Windows 10 and have WSL 2 installed already. If not, check out the Microsoft documentation for instructions, and come back here when you're finished. Microsoft recommends setting WSL 2 as the default version for simplicity. This guide assumes you've done that.
Next, you’re going to need some means of unpacking xz compressed files. You can do this with another WSL-based distribution, or use 7zip.
Download a Fedora 33 rootfs
Since Fedora doesn't ship an actual rootfs archive, we're going to abuse the one used to generate the container image for Docker Hub. You will want to download the tar.xz file from the fedora-cloud GitHub repository. Once you have the tar.xz, uncompress it, but don't unpack it. You want to end up with something like fedora-33-datestamp.tar. Once you have that, you're ready to build the image.
Composing the WSL Fedora build
I prefer to use c:\distros, but you can choose nearly whatever location you want. Whatever you choose, make sure the top level path exists before you import the build. Now open a cmd or powershell prompt, because it’s time to import:
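The import command looks like this (a sketch; it assumes the tarball from the previous step is in your Downloads folder, so adjust names and paths to match yours):

PS C:\Users\jperrin> wsl.exe --import Fedora-33 c:\distros\Fedora-33 $env:USERPROFILE\Downloads\fedora-33-datestamp.tar --version 2

You can then verify the import: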
PS C:\Users\jperrin> wsl.exe -l -v
NAME STATE VERSION
Fedora-33 Stopped 2
From here, you can start to play around with Fedora in wsl, but we have a few things we need to do to make it actually useful as a wsl distro.
wsl -d Fedora-33
This will launch Fedora’s wsl instance as the root user. From here, you’re going to install a few core packages and set a new default user. You’re also going to need to configure sudo, otherwise you won’t be able to easily elevate privileges if you need to install something else later.
wslutilities uses curl and wget for things like VS Code integration, so they're useful to have around. Since you need to use a Copr repo for it, you want the added dnf functionality.
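A reasonable core set, installed as root inside the instance (an assumption; add whatever else you rely on):

dnf install -y sudo passwd ncurses findutils wget curl dnf-plugins-core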
Add your user
Now it’s time to add your user, and set it as the default.
useradd -G wheel yourusername
passwd yourusername
Now that you’ve created your username and added a password, make sure they work. Exit the wsl instance, and launch it again, this time specifying the username. You’re also going to test sudo, and check your uid.
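For example (a sketch; substitute your username, and note that the first user created typically gets uid 1000):

wsl -d Fedora-33 -u yourusername
id -u
sudo whoami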
Assuming everything worked fine, you’re now ready to set the default user for your Fedora setup in Windows. To do this, exit the wsl instance and get back into Powershell. This Powershell one-liner configures your user properly:
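One way to do this (an assumption based on the WSL registry layout; it sets DefaultUid for the Fedora-33 distribution to 1000, the uid of the first user created):

PS C:\Users\jperrin> Get-ItemProperty Registry::HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Lxss\*\ DistributionName | Where-Object -Property DistributionName -eq Fedora-33 | Set-ItemProperty -Name DefaultUid -Value 1000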
Now you should be able to launch WSL again without specifying a user, and be yourself instead of root.
Customize!
From here, you're done getting the basic Fedora 33 setup running in WSL, but it doesn't have the Windows integration piece yet. If this is something you want, there's a Copr repo to enable. If you choose to add this piece, you'll be able to run Windows apps directly from inside your shell, as well as integrate your Linux environment easily with VS Code. Note that Copr is not officially supported by Fedora infrastructure. Use packages at your own risk.
dnf copr enable trustywolf/wslu
Now you can go configure your terminal, setup a Python development environment, or however else you want to use Fedora 33. Enjoy!
Stratis is described on its official website as an “easy to use local storage management for Linux.” See this short video for a quick demonstration of the basics. The video was recorded on a Red Hat Enterprise Linux 8 system. The concepts shown in the video also apply to Stratis in Fedora.
Stratis version 2.1 introduces support for encryption. Continue reading to learn how to get started with encryption in Stratis.
Prerequisites
Encryption requires Stratis version 2.1 or greater. The examples in this post use a pre-release of Fedora 33. Stratis 2.1 will be available in the final release of Fedora 33.
You’ll also need at least one available block device to create an encrypted pool. The examples shown below were done on a KVM virtual machine with a 5 GB virtual disk drive (/dev/vdb).
Create a key in the kernel keyring
The Linux kernel keyring is used to store the encryption key. For more information on the kernel keyring, refer to the keyrings manual page (man keyrings).
Use the stratis key set command to set up the key within the kernel keyring. You must specify where the key should be read from. To read the key from standard input, use the --capture-key option. To retrieve the key from a file, use the --keyfile-path <file> option. The last parameter is a key description. It will be used later when you create the encrypted Stratis pool.
For example, to create a key with the description pool1key, and to read the key from standard input, you would enter:
# stratis key set --capture-key pool1key
Enter desired key data followed by the return key:
The command prompts us to type the key data / passphrase, and the key is then created within the kernel keyring.
To verify that the key was created, run stratis key list:
# stratis key list
Key Description
pool1key
This verifies that the pool1key was created. Note that these keys are not persistent. If the host is rebooted, the key will need to be provided again before the encrypted Stratis pool can be accessed (this process is covered later).
If you have multiple encrypted pools, they can each have a separate key, or they can share the same key.
The keys can also be viewed using the following keyctl commands:
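For example (keyctl comes from the keyutils package; the first command links the persistent keyring for uid 0 into the session so its contents show up in the second):

# keyctl get_persistent @s 0
# keyctl show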
Now that a key has been created for Stratis, the next step is to create the encrypted Stratis pool. Encrypting a pool can only be done at pool creation. It isn’t currently possible to encrypt an existing pool.
Use the stratis pool create command to create a pool. Add --key-desc and the key description that you provided in the previous step (pool1key). This will signal to Stratis that the pool should be encrypted using the provided key. The below example creates the Stratis pool on /dev/vdb, and names it pool1. Be sure to specify an empty/available device on your system.
# stratis pool create --key-desc pool1key pool1 /dev/vdb
You can verify that the pool has been created with the stratis pool list command:
# stratis pool list
Name Total Physical Properties
pool1 4.98 GiB / 37.63 MiB / 4.95 GiB ~Ca, Cr
In the sample output shown above, ~Ca indicates that caching is disabled (the tilde negates the property). Cr indicates that encryption is enabled. Note that caching and encryption are mutually exclusive. Both features cannot be simultaneously enabled.
Next, create a filesystem. The example below demonstrates creating a filesystem named filesystem1, mounting it at the /filesystem1 mountpoint, and creating a test file in the new filesystem:
# stratis filesystem create pool1 filesystem1
# mkdir /filesystem1
# mount /stratis/pool1/filesystem1 /filesystem1
# cd /filesystem1
# echo "this is a test file" > testfile
Access the encrypted pool after a reboot
When you reboot you’ll notice that Stratis no longer shows your encrypted pool or its block device:
# stratis pool list
Name Total Physical Properties
# stratis blockdev list
Pool Name Device Node Physical Size Tier
To access the encrypted pool, first re-create the key with the same key description and key data / passphrase that you used previously:
# stratis key set --capture-key pool1key
Enter desired key data followed by the return key:
Next, run the stratis pool unlock command, and verify that you can now see the pool and its block device:
# stratis pool unlock
# stratis pool list
Name Total Physical Properties
pool1 4.98 GiB / 583.65 MiB / 4.41 GiB ~Ca, Cr
# stratis blockdev list
Pool Name Device Node Physical Size Tier
pool1 /dev/dm-2 4.98 GiB Data
Next, mount the filesystem and verify that you can access the test file you created previously:
# mount /stratis/pool1/filesystem1 /filesystem1/
# cat /filesystem1/testfile
this is a test file
Use a systemd unit file to automatically unlock a Stratis pool at boot
It is possible to automatically unlock your Stratis pool at boot without manual intervention. However, a file containing the key must be available. Storing the key in a file might be a security concern in some environments.
The systemd unit file shown below provides a simple method to unlock a Stratis pool at boot and mount the filesystem. Feedback on better or alternative methods is welcome. You can provide suggestions in the comment section at the end of this article.
Start by creating your key file with the following command. Be sure to substitute passphrase with the same key data / passphrase you entered previously.
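For example (a sketch; the key file path, unit name, and sleep delay are assumptions, and the key file should be readable only by root):

# echo -n passphrase > /root/pool1key
# chmod 400 /root/pool1key

Then save a unit such as /etc/systemd/system/stratis-filesystem1.service:

[Unit]
Description=Unlock Stratis pool1 and mount filesystem1
After=stratisd.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStartPre=/usr/bin/stratis key set --keyfile-path /root/pool1key pool1key
ExecStartPre=/usr/bin/stratis pool unlock
ExecStartPre=/usr/bin/sleep 2
ExecStart=/usr/bin/mount /stratis/pool1/filesystem1 /filesystem1

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable stratis-filesystem1.service so it runs at boot.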
LVM is a tool for logical volume management which includes allocating disks, striping, mirroring and resizing logical volumes. It is commonly used on Fedora installations (prior to BTRFS as default it was LVM+Ext4). But have you ever started up your system to find a message like the image above after you logged in? Uh oh, GNOME just said the home volume is almost out of space! Luckily, there is likely some space sitting around in another volume, unused and ready to reallocate. Here's how to reclaim hard-drive space with LVM.
The key to easily reallocating space between volumes is the Logical Volume Manager (LVM). Fedora 32 and before use LVM to divide disk space by default. This technology is similar to standard hard-drive partitions, but LVM is a lot more flexible. LVM enables not only flexible volume size management, but also advanced capabilities such as read-write snapshots, striping or mirroring data across multiple drives, using a high-speed drive as a cache for a slower drive, and much more. All of these advanced options can get a bit overwhelming, but resizing a volume is straightforward.
LVM basics
The volume group serves as the main container in the LVM system. By default Fedora only defines a single volume group, but there can be as many as needed. Actual hard drives and hard-drive partitions are added to the volume group as physical volumes. Physical volumes add available free space to the volume group. A typical Fedora install has one formatted boot partition, and the rest of the drive is a partition configured as an LVM physical volume.
Out of this pool of available space, the volume group allocates one or more logical volumes. These volumes are similar to hard-drive partitions, but without the limitation of contiguous space on the disk. LVM logical volumes can even span multiple devices! Just like hard-drive partitions, logical volumes have a defined size and can contain any filesystem which can then be mounted to specific directories.
What’s needed
Confirm the system uses LVM with the gnome-disks application, and make sure there is free space available in some other volume. Without space to reclaim from another volume, this guide isn’t useful. A Fedora live CD/USB is also needed. Any file system that needs to shrink must be unmounted. Running from a live image allows all the volumes on the hard-disk to remain unmounted, even important directories like / and /home.
Use gnome-disks to verify free space
A word of warning
No data should be lost by following this guide, but it does muck around with some very low-level and powerful commands. One mistake could destroy all data on the hard-drive. So backup all the data on the disk first!
Resizing LVM volumes
To begin, boot the Fedora live image and select Try Fedora at the dialog. Next, use the Run Command to launch the blivet-gui application (accessible by pressing Alt-F2, typing blivet-gui, then pressing enter). Select the volume group on the left under LVM. The logical volumes are on the right.
Explore logical volumes in blivet-gui
The logical volume labels consist of both the volume group name and the logical volume name. In the example, the volume group is “fedora_localhost-live” and there are “home”, “root”, and “swap” logical volumes allocated. To find the full volume, select each one, click on the gear icon, and choose resize. The slider in the resize dialog indicates the allowable sizes for the volume. The minimum value on the left is the space already in use within the file system, so this is the minimum possible volume size (without deleting data). The maximum value on the right is the greatest size the volume can have based on available free space in the volume group.
Resize dialog in blivet-gui
A grayed out resize option means the volume is full and there is no free space in the volume group. It’s time to change that! Look through all of the volumes to find one with plenty of extra space, like in the screenshot above. Move the slider to the left to set the new size. Free up enough space to be useful for the full volume, but still leave plenty of space for future data growth. Otherwise, this volume will be the next to fill up.
Click resize and note that a new item appears in the volume listing: free space. Now select the full volume that started this whole endeavor, and move the slider all the way to the right. Press resize and marvel at the new improved volume layout. However, nothing has changed on the hard drive yet. Click on the check-mark to commit the changes to disk.
Review changes in blivet-gui
Review the summary of the changes, and if everything looks right, click Ok to proceed. Wait for blivet-gui to finish. Now reboot back into the main Fedora install and enjoy all the new space in the previously full volume.
Planning for the future
It is challenging to know how much space any particular volume will need in the future. Instead of immediately allocating all available free space, consider leaving it free in the volume group. In fact, Fedora Server reserves space in the volume group by default. Extending a volume is possible while it is online and in use. No live image or reboot needed. When a volume is almost full, easily extend the volume using part of the available free space and keep working. Unfortunately the default disk manager, gnome-disks, does not support LVM volume resizing, so install blivet-gui for a graphical management tool. Alternately, there is a simple terminal command to extend a volume:
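For example, to grow a home volume by 1 GiB and resize its filesystem in one step (a sketch; the volume group name matches the earlier example, so substitute your own):

$ sudo lvresize -r -L +1G fedora_localhost-live/home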
Reclaiming hard-drive space with LVM just scratches the surface of LVM capabilities. Most people, especially on the desktop, probably don’t need the more advanced features. However, LVM is there when the need arises, though it can get a bit complex to implement. BTRFS is the default filesystem, without LVM, starting with Fedora 33. BTRFS can be easier to manage while still flexible enough for most common usages. Check out the recent Fedora Magazine articles on BTRFS to learn more.