
Add storage to your Fedora system with LVM

Sometimes there is a need to add another disk to your system. This is where Logical Volume Management (LVM) comes in handy. The cool thing about LVM is that it’s fairly flexible. There are several ways to add a disk. This article describes one way to do it.

Heads up!

This article does not cover the process of physically installing a new disk drive into your system. Consult your system and disk documentation on how to do that properly.

Important: Always make sure you have backups of important data. The steps described in this article will destroy data if it already exists on the new disk.

Good to know

This article doesn’t cover every LVM feature in depth; the focus is on adding a disk. In short, LVM is built around volume groups, which are made up of one or more partitions and/or disks added as physical volumes. A volume group can then be divided into many logical volumes, and logical volumes can be used like any other storage for filesystems, ramdisks, etc. More information can be found in the LVM documentation.

Think of the physical volumes as forming a pool of storage (a volume group) from which you then carve out logical volumes for your system to use directly.

Preparation

Make sure you can see the disk you want to add. Use lsblk prior to adding the disk to see what storage is already available or in use.

$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
zram0 251:0 0 989M 0 disk [SWAP]
vda 252:0 0 20G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 19G 0 part
└─fedora_fedora-root 253:0 0 19G 0 lvm /

This article uses a virtual machine with virtual storage. Therefore the device names start with vda for the first disk, vdb for the second, and so on. The name of your device may be different. Many systems will see physical disks as sda for the first disk, sdb for the second, and so on.
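If several disks are attached and you are not sure which name will belong to the new one, you can list just the whole disks along with their sizes and model strings (a quick check; the MODEL column may be empty for virtual disks):

$ lsblk -d -o NAME,SIZE,MODEL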

Once the new disk has been connected and your system is back up and running, use lsblk again to see the new block device.

$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
zram0 251:0 0 989M 0 disk [SWAP]
vda 252:0 0 20G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 19G 0 part
└─fedora_fedora-root 253:0 0 19G 0 lvm /
vdb 252:16 0 10G 0 disk

There is now a new device named vdb. The location for the device is /dev/vdb.

$ ls -l /dev/vdb
brw-rw----. 1 root disk 252, 16 Nov 24 12:56 /dev/vdb

We can see the disk, but we cannot use it with LVM yet. If you run blkid you should not see it listed. For this and following commands, you’ll need to ensure your system is configured so you can use sudo:

$ sudo blkid
/dev/vda1: UUID="4847cb4d-6666-47e3-9e3b-12d83b2d2448" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="830679b8-01"
/dev/vda2: UUID="k5eWpP-6MXw-foh5-Vbgg-JMZ1-VEf9-ARaGNd" TYPE="LVM2_member" PARTUUID="830679b8-02"
/dev/mapper/fedora_fedora-root: UUID="f8ab802f-8c5f-4766-af33-90e78573f3cc" BLOCK_SIZE="4096" TYPE="ext4"
/dev/zram0: UUID="fc6d7a48-2bd5-4066-9bcf-f062b61f6a60" TYPE="swap"

Add the disk to LVM

Initialize the disk using pvcreate. You need to pass the full path to the device. In this example it is /dev/vdb; on your system it may be /dev/sdb or another device name.

$ sudo pvcreate /dev/vdb
Physical volume "/dev/vdb" successfully created.

You should see the disk has been initialized as an LVM2_member when you run blkid:

$ sudo blkid
/dev/vda1: UUID="4847cb4d-6666-47e3-9e3b-12d83b2d2448" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="830679b8-01"
/dev/vda2: UUID="k5eWpP-6MXw-foh5-Vbgg-JMZ1-VEf9-ARaGNd" TYPE="LVM2_member" PARTUUID="830679b8-02"
/dev/mapper/fedora_fedora-root: UUID="f8ab802f-8c5f-4766-af33-90e78573f3cc" BLOCK_SIZE="4096" TYPE="ext4"
/dev/zram0: UUID="fc6d7a48-2bd5-4066-9bcf-f062b61f6a60" TYPE="swap"
/dev/vdb: UUID="4uUUuI-lMQY-WyS5-lo0W-lqjW-Qvqw-RqeroE" TYPE="LVM2_member"

You can list all physical volumes currently available using pvs:

$ sudo pvs
PV VG Fmt Attr PSize PFree
/dev/vda2 fedora_fedora lvm2 a-- <19.00g 0
/dev/vdb lvm2 --- 10.00g 10.00g

/dev/vdb is listed as a PV (physical volume), but it isn’t assigned to a VG (volume group) yet.

Add the physical volume to a volume group

You can find a list of available volume groups using vgs:

$ sudo vgs
VG #PV #LV #SN Attr VSize VFree
fedora_fedora 1 1 0 wz--n- 19.00g 0

In this example, there is only one volume group available. Next, add the physical volume to fedora_fedora:

$ sudo vgextend fedora_fedora /dev/vdb
Volume group "fedora_fedora" successfully extended

You should now see the physical volume is added to the volume group:

$ sudo pvs
PV         VG            Fmt  Attr PSize   PFree
/dev/vda2  fedora_fedora lvm2 a--  <19.00g       0
/dev/vdb   fedora_fedora lvm2 a--  <10.00g <10.00g

Look at the volume groups:

$ sudo vgs
VG #PV #LV #SN Attr VSize VFree
fedora_fedora 2 1 0 wz--n- 28.99g <10.00g

You can get a detailed list of the specific volume group and physical volumes as well:

$ sudo vgdisplay fedora_fedora
--- Volume group ---
VG Name fedora_fedora
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size 28.99 GiB
PE Size 4.00 MiB
Total PE 7422
Alloc PE / Size 4863 / 19.00 GiB
Free PE / Size 2559 / 10.00 GiB
VG UUID C5dL2s-dirA-SQ15-TfQU-T3yt-l83E-oI6pkp

Look at the PV:

$ sudo pvdisplay /dev/vdb
--- Physical volume ---
PV Name               /dev/vdb
VG Name               fedora_fedora
PV Size               10.00 GiB / not usable 4.00 MiB
Allocatable           yes
PE Size               4.00 MiB
Total PE              2559
Free PE               2559
Allocated PE          0
PV UUID               4uUUuI-lMQY-WyS5-lo0W-lqjW-Qvqw-RqeroE

Now that we have added the disk, we can allocate space to logical volumes (LVs):

$ sudo lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root fedora_fedora -wi-ao---- 19.00g

Look at the logical volumes. Here’s a detailed look at the root LV:

$ sudo lvdisplay fedora_fedora/root
--- Logical volume ---
LV Path /dev/fedora_fedora/root
LV Name root
VG Name fedora_fedora
LV UUID yqc9cw-AvOw-G1Ni-bCT3-3HAa-qnw3-qUSHGM
LV Write Access read/write
LV Creation host, time fedora, 2020-11-24 11:44:36 -0500
LV Status available
LV Size 19.00 GiB
Current LE 4863
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

Look at the size of the root filesystem and compare it to the logical volume size.

$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fedora_fedora-root 19G 1.4G 17G 8% /

The logical volume and the filesystem both agree the size is 19G. Let’s add 5G to the root logical volume:

$ sudo lvresize -L +5G fedora_fedora/root
Size of logical volume fedora_fedora/root changed from 19.00 GiB (4863 extents) to 24.00 GiB (6143 extents).
Logical volume fedora_fedora/root successfully resized.

We now have 24G available to the logical volume. Look at the / filesystem.

$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fedora_fedora-root 19G 1.4G 17G 8% /

The filesystem still reports a size of 19G. This is because the logical volume is not the same thing as the filesystem; to use the new space added to the logical volume, the filesystem itself must be resized.

$ sudo resize2fs /dev/fedora_fedora/root
resize2fs 1.45.6 (20-Mar-2020)
Filesystem at /dev/fedora_fedora/root is mounted on /; on-line resizing required
old_desc_blocks = 3, new_desc_blocks = 3
The filesystem on /dev/fedora_fedora/root is now 6290432 (4k) blocks long.
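Note that resize2fs only handles ext2/3/4 filesystems. If the logical volume held an XFS filesystem, as many Fedora Server installations do, you would instead grow it by passing the mount point to xfs_growfs:

$ sudo xfs_growfs /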

Look at the size of the filesystem.

$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fedora_fedora-root 24G 1.4G 21G 7% /

As you can see, the root file system (/) has taken all of the space available on the logical volume and no reboot was needed.

You have now initialized a disk as a physical volume, and extended the volume group with the new physical volume. After that you increased the size of the logical volume, and resized the filesystem to use the new space from the logical volume.
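As a recap, here is the whole procedure in one place, using the device, volume group, and logical volume names from this article (substitute your own):

$ sudo pvcreate /dev/vdb
$ sudo vgextend fedora_fedora /dev/vdb
$ sudo lvresize -L +5G fedora_fedora/root
$ sudo resize2fs /dev/fedora_fedora/root

As an aside, lvresize can grow the filesystem in the same step if you pass it the -r (--resizefs) option.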


Wave Engine 3.1 Released

Wave Engine recently released version 3.1. Wave Engine is a completely free-to-use 3D game engine capable of targeting most platforms and XR devices. We have been keeping an eye on this engine since 2015, when we featured it in the Closer Look series, and we looked at it again in 2019 when WaveEngine 3.0 was previewed after a long period of silence. Now, after another quiet stretch, the 3.1 release brings .NET 5 and C# 9 support as well as graphical improvements.

Details from a guest post on the DotNet team blog:

We are glad to announce that, aligned with Microsoft, we have just released WaveEngine 3.1 with official support for .NET 5 and C# 9. So if you are using C# and .NET 5, you can start creating 3D apps based on .NET 5 today. Download it from the WaveEngine download page right now and start creating 3D apps based on .NET 5 today. We would like to share with you our journey migrating from .NET Core 3.1 to .NET 5, as well as some of the new features made possible with .NET 5.

From .NET Core 3.1 to .NET 5

To make this possible we started working on this one year ago, when we decided to rewrite our low-level graphics abstraction API to support the new Vulkan, DirectX12 and Metal graphics APIs. At that time, it was a project based on .NET Framework with an editor based on GTK# which had problems supporting new resolutions, multiscreen, or the new DPI standards. At that time, we were following all the great advances in performance that Microsoft was doing in .NET Core and the future framework called .NET 5, and we decided that we had to align our engine with this to take advantage of all the new performance features, so we started writing a new editor based on WPF and .NET Core and changed all our extensions and libraries to .NET Core. This took us one year of hard work, but the results comparing our old version 2.5 and the new one 3.1 in terms of performance and memory usage are awesome, around 4-5x faster.

Now we have official support for .NET 5 and this technology is ready for .NET 6 so we are glad to become one of the first engines to support it.

In the video below we review Wave Engine 3.1. All of the samples used in the video are available on GitHub. Please note this repository should not be cloned; it simply links to a different repository for each sample.

Video: https://www.youtube.com/watch?v=9zIQHBPW1E4

Unity Make MLAPI Official Networking Library for GameObjects

Way back in 2018 Unity announced the official deprecation of the UNET networking solution, which consisted of the low level (LLAPI) and high level (HLAPI) networking solutions. The reasons for the deprecation were:

Through our connected games initiatives, we’re revamping how we can make networked games easier, more performant, and multiplayer-ready by default. To make these important changes, we need to start anew. That means existing multiplayer features will be gradually deprecated, with more performant, scalable, and secure technologies taking their place. But don’t worry – games with impacted features will have plenty of time to react.

At this point in time the future was clearly DOTS (Data Oriented Technology Stack), which included a new solution called Unity NetCode. In the end the migration to DOTS hasn’t gone as smoothly as expected, and recently Unity have started back-filling support for GameObject-based development. One recent example was the acquisition, then subsequent free release, of the Bolt visual scripting solution. Today Unity have made a similar move by adopting the open source MLAPI networking project as the new official Unity networking solution. Details from the Unity blog:

One of Unity’s top priorities for 2021 is to expand the Unity ecosystem with a first-party multiplayer networking solution for GameObjects that is easy to set up and extend, scales to meet the needs of high-performance titles, and is seamlessly integrated into the Unity ecosystem.

The existing UNet HLAPI architecture is not well suited for the in-depth evolution that is required to support games at scale. Rest assured, we don’t want to reinvent the wheel. The ecosystem currently offers multiple strong solutions, and the best path toward providing you with the scalable framework we envision is to build on the amazing work that already exists in the community. 

We considered various open source software (OSS) alternatives and found a framework that fit our needs. We’re thrilled to share that the OSS multiplayer networking framework MLAPI is joining the Unity family, along with its creator, Albin Corén.

As of today, we’re already working on integrating and evolving MLAPI into what will become Unity’s first-party GameObjects netcode solution. We plan to continue the development fully open source, developing in the open and welcoming community contributions. If you are interested, you can join us on the GitHub MLAPI repo.

You can learn more about MLAPI and the ongoing saga of networking on Unity in the video below.

Video: https://www.youtube.com/watch?v=Zc3lLnE7zFs

FMOD Studio Now Free For Indie Game Developers

FMOD, perhaps the most popular audio middleware solution for games, just updated their indie developer licenses, effectively making the use of FMOD free for smaller indie game developers. So what defines an indie game developer here? First, you need to make less than $200K gross revenue per year, and second, you need to have less than $500K USD in funding for your game title. There are also some limitations by industry; for example, gambling and simulation projects do not qualify for this license.

The primary details of this announcement came via this tweet:

FMOD Free indie license tweet.

The key paragraph from the linked legal document is the following:

This EULA grants you the right to use FMOD Studio Engine, for Commercial use, subject to the following:

  • Development budget of the project is less than $500k (Refer to www.fmod.com/licensing#licensing-faq for information);
  • Total gross revenue / funding per year for the developer, before expenses, is less than $200k (Refer to www.fmod.com/licensing#licensing-faq for information);
  • FMOD Studio Engine is integrated and redistributed in a game application (Product) only;
  • FMOD Studio Engine is not distributed as part of a game engine or tool set;
  • Project is registered in your profile page at www.fmod.com/profile#projects;
  • Product includes attribution in accordance with Clause 3.

More details about the licensing changes are available on the FMOD licensing page. FMOD has support for several game engines including Unreal and Unity, and if you are a Godot developer there are community projects that integrate FMOD, including a GDNative version. To learn more about FMOD and the new indie licensing, check out the video below.

Video: https://www.youtube.com/watch?v=XF-AbQHme3s

Unreal Engine 4.26 Released

Epic Games have just released UE 4.26. In this release we see features such as hair and anisotropy reach production-ready status. Additionally there is a new water simulation system, better integration of the new Chaos physics system, and a brand new system for creating better skies, lighting and environmental clouds. There were also several advancements on the filmmaking side of the equation, alongside hundreds of other small improvements and bug fixes. With each new Unreal Engine release, more and more of the functionality traditionally done in your DCC tool of choice, such as modelling, rigging, animating and sculpting, is being added to Unreal.

A summary of new features from the Unreal Engine 4.26 release notes:

The production-ready Hair, Fur, and Feathers system enables you to design the most believable humans and animals. You can use the Volumetric Cloud component along with the Sky Atmosphere and Sky Light to author and render realistic or stylized skies, clouds, and other atmospheric effects with full artistic freedom. The new Water System makes it possible to create believable bodies of water within your landscape terrains that react with your characters, vehicles, and weapons. With an improved and expanded feature set, Chaos physics now lets you simulate Vehicles, Cloth, and Ragdolls in addition to Rigid Bodies so every aspect of the environment comes to life.

Sequencer now works in conjunction with Control Rig and the new full-body IK solution to create new animations inside of Sequencer, reducing the need to use external tools. Movie Render Queue (formerly known as High Quality Media Export) has been enhanced to support render passes enabling further enhancements to be made to the final image in a downstream compositing application. nDisplay multi-display rendering is easier to set up and configure in addition to enabling more pixels to be rendered at a higher frame rate, thus increasing performance and supporting larger LED volumes with existing hardware. The Collaborative Viewer Template has been significantly improved to enhance the collaborative design review experience, and enable more users to join a session. The Remote Control API has been improved to seamlessly connect properties and functionality in Unreal Editor to UI widgets giving users the ability to quickly change properties from an external device, such as an artist on stage changing the sky rotation or the sun position from an iPad.

In the video below we take a quick look at the new water system as well as a quick tutorial on creating Hair alembic files using Blender for export to Unreal Engine, then quickly showcase the new Groom hair functionality.

Video: https://www.youtube.com/watch?v=72nts1vPcDk

Godot 4 New 2D Features Showcase

Over on the Godot Engine blog they recently put together a summary of some of the exciting new 2D features that will arrive in Godot 4. Today we go hands-on with the majority of these new features. In addition to the new tricks showcased in the video there are other 2D improvements in Godot 4 including across the board performance improvements (due to the new Vulkan renderer and internal optimizations) as well as support for 2D signed distance fields.

Highlighted new features include:

  • new 2D CanvasTexture with support for diffuse, normal and specular maps
  • better support for 2D lights (all drawn in a single pass)
  • directional light and shadow support
  • new child clipping feature
  • new CanvasGroup for grouping multiple sprites into a single draw call

In addition to these new features, Godot have also just released an update on the improvements to tilemap support, which will be covered in a separate video. Check out the video below to see these excellent new Godot 2D features in action.

Video: https://www.youtube.com/watch?v=_VR-xHsio78

Vagrant beyond the basics

There are, like most things in the Unix/Linux world, many ways of doing things with Vagrant, but here are some examples of ways to grow your Vagrantfile portfolio and increase your knowledge and use.

If you have not yet installed Vagrant, you can follow the first part of this series.

Some Vagrantfile basics

All Vagrantfiles start with the line Vagrant.configure("2") do |config| and finish with a corresponding end:

 
Vagrant.configure("2") do |config|
  ...
  ...
end

The "2" is the configuration version (not the Vagrant release version), and is currently either 1 or 2. Unless you need to use the older configuration format, simply stick with the latest.

The config structure is broken down into namespaces:

  • config.vm – modify the configuration of the machine(s) that Vagrant manages.
  • config.ssh – configure how Vagrant will access your machine over SSH.
  • config.winrm – configure how Vagrant will access your Windows guest over WinRM.
  • config.winssh – the WinSSH communicator is built specifically for the Windows native port of OpenSSH.
  • config.vagrant – modify the behavior of Vagrant itself.

Each line in a namespace begins with the word config:

config.vm.box = "fedora/32-cloud-base"
config.vm.network "private_network"

There are many options here, and a read of the documentation pages is strongly recommended. They can be found at https://www.vagrantup.com/docs/vagrantfile

Also in this section you can configure provider-specific options. In this case the provider is libvirt, and the specific config looks like this:

 
config.vm.provider :libvirt do |libvirt|
  libvirt.cpus = 1
  libvirt.memory = 512
end

In the example above, all libvirt VMs will be created with a single CPU and 512 MB of memory unless specifically overridden.

The VM namespace is where you define all machines you want this Vagrantfile to build. Notice that this is still a part of the config section, and lines should therefore begin with config. All sections or parts of sections have an end statement to close them off.
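As the nesting grows, it is easy to lose track of an end statement. You can check that your Vagrantfile still parses cleanly at any time by running vagrant validate from the directory containing it:

$ vagrant validate
Vagrantfile validated successfully.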

Creating multiple machines at once

Depending on what you need to achieve, this can be a simple loop or multiple machine definitions. To create any number of machines in a series, with the same settings but perhaps different names and/or IP addresses, you can just provide a range as shown here:

 
(1..5).each do |i|
  config.vm.define "server#{i}" do |server|
    server.vm.hostname = "server#{i}.example.com"
  end
end

This will create 5 servers, named server1 through server5.

Of note, using a Ruby-style "for i in 1..3 do" loop doesn’t work, despite Vagrantfile syntax actually being Ruby, so use the method from the example above.

If you need servers with different hostnames, different hardware etc then you’ll need to specify them individually, or at least in groups if the situation lends itself to that. Let’s say you need to create a typical web/db/load balancer infrastructure, with 2 web servers, a single database server and a load balancer for the web traffic. Ignoring the specific software setup for this, to simply create the virtual machines ready for provisioning you could use something like this:

 
# Load Balancer
config.vm.define "loadbal", primary: true do |loadbal|
  loadbal.vm.hostname = "loadbal"
end

# Database
config.vm.define "db", primary: true do |db|
  db.vm.hostname = "db"
end

# Web Servers x2
(1..2).each do |i|
  config.vm.define "web#{i}" do |web|
    web.vm.hostname = "web#{i}"
  end
end

This uses a combination of multiple machine calls and a small loop to build 4 VMs with a single vagrant up command.
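After vagrant up completes, you can confirm that all four guests were built with vagrant status; with the libvirt provider the output looks roughly like this:

$ vagrant status
Current machine states:

loadbal                   running (libvirt)
db                        running (libvirt)
web1                      running (libvirt)
web2                      running (libvirt)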

Networking

Vagrant generally creates its own network for VM access, and you use this with vagrant ssh. If you create more than one VM then you must use the VM name to identify which one you wish to connect to: vagrant ssh vmname.

There are a number of configuration options available which allow you to interact with your VMs in various ways.

The vagrant-libvirt plugin creates a network for the guests to use. This is automated and will always be present even if you define your own networks. The network is named “vagrant-libvirt” and can be seen either in the Virtual Networks tab of virt-manager’s connection details or by issuing a sudo virsh net-list command.

If you use DHCP for your guests, you can find the individual IP addresses with the virsh net-dhcp-leases command: sudo virsh net-dhcp-leases vagrant-libvirt

Port Forwarding

The simplest change to default networking is port forwarding. This uses a simple format like most Vagrant config:

config.vm.network "forwarded_port", guest: 80, host: 8080

This listens to port 8080 on your local machine and forwards connections to port 80 on the Vagrant machine. If you need to use a UDP port, simply add , protocol: "udp" to the end of that line (note the comma, which should come immediately after the second port number).
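Assuming something inside the guest is listening on port 80 (a web server, for example), you can verify the forward from the host:

$ curl -I http://localhost:8080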

Obviously for more complex configurations this might not be ideal, as you need to specify every single port you want to forward. If you then add multiple machines the complexity can really become too much.

In addition to this, anyone on your network can access these ports if they know your IP address, so that’s something you should be aware of.

Public Network

This creates a network card for the Vagrant VM which connects to your host network, and will therefore be visible to all machines on that network. As Vagrant is not designed to be secure, you should be aware of any vulnerabilities and take steps to protect against them.

To configure a public network, add config.vm.network "public_network" to your Vagrantfile. This will use DHCP to obtain a network address.

If you wish to assign a static IP address, you can add one to the end of the network declaration: config.vm.network "public_network", ip: "192.168.0.1"

If you’re creating multiple guests you can put the network configuration in the vm namespace, and even allocate IPs based on iteration too:

 
Vagrant.configure("2") do |config|
  config.vm.box = "centos/8"
  config.vm.provider :libvirt do |libvirt|
    libvirt.qemu_use_session = false
  end

  # Servers x2
  (1..2).each do |i|
    config.vm.define "server#{i}" do |server|
      server.vm.hostname = "server#{i}"
      server.vm.network "public_network", ip: "192.168.122.20#{i}"
    end
  end
end

Private Network

This works very much like the public network option, except the network is only available to the host machine and the Vagrant guests. The syntax is almost identical too: config.vm.network "private_network", type: "dhcp"

 To use a static IP address, simply add it:

 
config.vm.network "private_network", ip: "192.168.50.4"

This will create a new network in libvirt, usually named something like "vagrant-private-dhcp"; you can see this with the command sudo virsh net-list while the VM is running. This network is created and destroyed along with the Vagrant guests.

Again, the network config can be specified for all guests, or per guest as shown in the public network example above.

Provisioning

Once you have your VMs defined, you can obviously then do whatever you want with them, but as soon as you issue a vagrant destroy command any changes will be lost. This is where automated provisioning comes in.

You can use several methods to provision your machines, from simple file copies to shell scripts, Ansible, Chef and Puppet. Many of the main methods can be used, but I’ll cover the simple ones here; if you need to use something else, please read the documentation as it’s all covered.

File uploads

To copy a file to the Vagrant guest, add a line to the Vagrantfile like this:

 
config.vm.provision "file", source: "~/myfile", destination: "myfile"

You can copy directories too:

 
config.vm.provision "file", source: "~/path/to/host/folder", destination: "$HOME/remote/newfolder"

The directory structure should already exist on the Vagrant host, and will be copied in its entirety, including subdirectories and files.

Note: If you add a trailing slash to the destination path, the source path will be placed under it, so make sure you only do this if you want that outcome. For example, if the above destination was "$HOME/remote/newfolder/", then the result would see "$HOME/remote/newfolder/folder" created with the contents of the source placed there.
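Provisioners, including file uploads, normally run only on the first vagrant up. If you change the source files later, you can re-run provisioning against running guests without rebuilding them:

$ vagrant provision        # re-provision all machines
$ vagrant provision web1   # re-provision a single machine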

Shell commands

You can include individual commands, inline scripts or external scripts to perform provisioning tasks.

A single command would take this form, and any valid command line command can be used here:

config.vm.provision "shell", inline: "sudo dnf update -y"

An inline script is less common, and declared at the top of the Vagrantfile then called during provisioning:

 
$script = <<-SCRIPT
echo I am provisioning...
date > /etc/vagrant_provisioned_at
SCRIPT

Vagrant.configure("2") do |config|
  config.vm.provision "shell", inline: $script
end

More common is the external shell script, which gives more flexibility and makes code more modular. Vagrant uploads the file to the guest then executes it. Simply call the script in the provisioning line:

config.vm.provision "shell", path: "script.sh"

The file need not be local to the Vagrant host either:

config.vm.provision "shell", path: "https://example.com/provisioner.sh"

Ansible

To use Ansible to provision your VMs you must have it installed on the Vagrant host; see https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#installing-ansible-on-rhel-centos-or-fedora.

You specify an Ansible playbook to provision your VM in the following way:

 
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "playbook.yml"
end

This then calls the playbook, which will run just as any externally run Ansible playbook would.

If you’re building multiple VMs with your Vagrantfile then it’s likely you want different configurations for some of them, and in this case you should provision within the definition of each VM, as shown here:

 
# Web Servers x2
(1..2).each do |i|
  config.vm.define "web#{i}" do |web|
    web.vm.hostname = "web#{i}"
    web.vm.provision "ansible" do |ansible|
      ansible.playbook = "web.yml"
    end
  end
end

Ansible provisioners come in two formats: ansible and ansible_local. The ansible provisioner requires that Ansible is installed on the Vagrant host, and will connect remotely to your guest VMs to provision them. This means all necessary SSH authentication must be in place for it to work. The ansible_local provisioner executes playbooks directly on the guest VMs, which therefore requires Ansible to be installed on each of the guests you want to provision. Vagrant will try to install Ansible on the guests in order to do this (this behavior can be controlled with the install option, and is enabled by default). On RHEL-style systems like Fedora, Ansible is installed from the EPEL repository. Simply use either ansible or ansible_local in the config.vm.provision command to choose the style you need.

Synced Folders

Vagrant allows you to sync folders between your Vagrant host and your guests, allowing access to configuration files, data, etc. By default, the folder containing the Vagrantfile is shared and mounted under /vagrant on each guest.

To configure additional synced folders, use the config.vm.synced_folder command:

 
config.vm.synced_folder "src/", "/srv/website"

The two parameters are the source folder on the Vagrant host and the mount directory on the guest. The destination folder will be created if it does not exist, recursively if necessary.

Options for synced folders allow you to configure them better, including the option to disable them completely. Other options allow you to specify a group owner of the folder (group), the folder owner (owner), plus mount options. There are others but these are the main ones.

You can disable the default share with the following command:

 
config.vm.synced_folder ".", "/vagrant", disabled: true

Other options are configured as follows:

 
config.vm.synced_folder "src/", "/srv/website",
  owner: "apache", group: "apache"

NFS synced folders

When using Vagrant on a Linux host, synced folders use NFS (with the exception of the default share, which uses rsync; see below), so you must have NFS installed on the Vagrant host, and the guests also need NFS support installed. To use NFS with non-Linux hosts, simply specify the folder type as "nfs":

 
config.vm.synced_folder ".", "/vagrant", type: "nfs"

RSync synced folders

These are the easiest to use, as they usually work without any intervention on a Linux host. This is a one-way sync from host to guest, performed at startup (vagrant up) or after a vagrant reload command is issued. The default share of the Vagrant project directory is done with rsync. To configure a synced folder with rsync, specify the type as "rsync":

 
config.vm.synced_folder ".", "/vagrant", type: "rsync"
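Because rsync shares are one-way and only synced at vagrant up or vagrant reload, changes made on the host afterwards are not visible in the guest until you push them again:

$ vagrant rsync         # one-off sync from host to guest(s)
$ vagrant rsync-auto    # watch the host folder and sync on every change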


Unreal Engine Asset Giveaway For December 2020

Every month Epic Games gives away several assets on the Unreal Engine marketplace, and December is no exception. This month we have 5 new assets that are available for free until the first Tuesday in January, but once "purchased" those assets are yours to use free forever. Speaking of forever, there is also one new asset in the permanently free collection.

This month’s free assets include:

This month’s permanently free asset is:

A common question with these assets is whether they can be used outside of Unreal Engine. Generally the answer is yes, unless the asset was owned or sourced directly by Epic Games, like the Power IK asset this month, in which case it can only be used in Unreal Engine projects. You can learn more about this month’s UE4 asset giveaway in the video below.

Video: https://www.youtube.com/watch?v=mrX4ZpqUfVU

GDevelop Game Engine Revisited

We first looked at the GDevelop game engine back in 2017 in our Closer Look game engine series. In the intervening years, GDevelop 5 has come a long way, bringing more and more features to this impressive open source, cross-platform 2D game engine. In the past year there have been over a dozen new beta releases of the engine, including several community contributions, along with some updates resulting from the 2020 Google Summer of Code. While many of these releases aren’t large enough to justify a video on their own, taken as a whole it is certainly time to revisit this game engine and the improvements it has seen.

Some of the highlights of recent releases include:

  • add support for a new asset store with hundreds of ready made game objects
  • new analytics system without requiring a third party solution
  • better support for right to left languages
  • support for dynamic 2D lights
  • customizable keyboard shortcuts
  • peer to peer communication extension
  • live preview (hot reloading) support
  • command palette for quickly launching editors
  • new editor themes

These are just a few highlights of the dozens of releases over the last few months. If you are interested in checking out GDevelop, it’s available for Windows, Mac, Linux and online. It is also an open source project, with the source code available on GitHub under the MIT license. If you want to learn more or run into problems, be sure to check out their Discord server. You can learn more about GDevelop and see it in action in the video below.

Video: https://www.youtube.com/watch?v=0Vni4hXQAx8

Getting started with Stratis – up and running

When adding storage to a Linux server, system administrators often use commands like pvcreate, vgcreate, lvcreate, and mkfs to integrate the new storage into the system. Stratis is a command-line tool designed to make managing storage much simpler. It creates, modifies, and destroys pools of storage. It also allocates and deallocates filesystems from the storage pools.

Instead of an entirely in-kernel approach like ZFS or Btrfs, Stratis uses a hybrid approach with components in both user space and kernel land. It builds on existing block device managers like device mapper and existing filesystems like XFS. Monitoring and control is performed by a user space daemon.

Stratis tries to avoid some ZFS characteristics like restrictions on adding new hard drives or replacing existing drives with bigger ones. One of its main design goals is to achieve a positive command-line experience.

Install Stratis

Begin by installing the required packages. Several Python-related dependencies will be automatically pulled in. The stratisd package provides the stratisd daemon which creates, manages, and monitors local storage pools. The stratis-cli package provides the stratis command along with several Python libraries.

# yum install -y stratisd stratis-cli

Next, enable the stratisd service.

# systemctl enable --now stratisd

Note that the "enable --now" syntax shown above both permanently enables and immediately starts the service.

After determining what disks/block devices are present and available, the three basic steps to using Stratis are:

  1. Create a pool of the desired disks.
  2. Create a filesystem in the pool.
  3. Mount the filesystem.

In the following example, four virtual disks are available in a virtual machine. Be sure not to use the root/system disk (/dev/vda in this example)!

# sfdisk -s
/dev/vda: 31457280
/dev/vdb:   5242880
/dev/vdc:   5242880
/dev/vdd:   5242880
/dev/vde:   5242880
total: 52428800 blocks

Create a storage pool using Stratis

# stratis pool create testpool /dev/vdb /dev/vdc
# stratis pool list
Name       Total Physical Size   Total Physical Used
testpool                10 GiB                56 MiB

After creating the pool, check the status of its block devices:

# stratis blockdev list
Pool Name   Device Node Physical Size   State  Tier
testpool  /dev/vdb            5 GiB  In-use  Data
testpool  /dev/vdc            5 GiB  In-use  Data

Create a filesystem using Stratis

Next, create a filesystem. As mentioned earlier, Stratis uses the existing DM (device mapper) and XFS filesystem technologies to create thinly-provisioned filesystems. By building on these existing technologies, large filesystems can be created and it is possible to add physical storage as storage needs grow.

# stratis fs create testpool testfs
# stratis fs list
Pool Name  Name  Used Created        Device            UUID
testpool  testfs 546 MiB  Apr 18 2020 09:15 /stratis/testpool/testfs  095fb4891a5743d0a589217071ff71dc

Note that "fs" in the example above can optionally be written out as "filesystem".

Mount the filesystem

Next, create a mount point and mount the filesystem.

# mkdir /testdir
# mount /stratis/testpool/testfs /testdir
# df -h | egrep 'stratis|Filesystem'
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/stratis-1-3e8e[truncated]71dc  1.0T  7.2G 1017G   1% /testdir

The actual space used by a filesystem is shown using the stratis fs list command demonstrated previously. Notice how the testfs filesystem has a virtual size of 1.0T. If the data in a filesystem approaches its virtual size, and there is available space in the storage pool, Stratis will automatically grow the filesystem. Note that beginning with Fedora 34, the device path takes the form /dev/stratis/<pool-name>/<filesystem-name>.

Add the filesystem to fstab

To configure automatic mounting of the filesystem at boot time, run the following commands:

# UUID=`lsblk -n -o uuid /stratis/testpool/testfs`
# echo "UUID=${UUID} /testdir xfs defaults 0 0" >> /etc/fstab

After updating fstab, verify that the entry is correct by unmounting and mounting the filesystem:

# umount /testdir
# mount /testdir
# df -h | egrep 'stratis|Filesystem'
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/stratis-1-3e8e[truncated]71dc  1.0T  7.2G 1017G   1% /testdir

Adding cache devices with Stratis

Suppose /dev/vdd is an available SSD (solid state disk). To configure it as a cache device and check its status, use the following commands:

# stratis pool add-cache testpool  /dev/vdd
# stratis blockdev
Pool Name   Device Node Physical Size  State   Tier
testpool   /dev/vdb            5 GiB  In-use   Data
testpool   /dev/vdc            5 GiB  In-use   Data
testpool   /dev/vdd            5 GiB  In-use  Cache

Growing the storage pool

Suppose the testfs filesystem is close to using all the storage capacity of testpool. You could add an additional disk/block device to the pool with commands similar to the following:

# stratis pool add-data testpool /dev/vde
# stratis blockdev
Pool Name Device Node Physical Size   State   Tier
testpool   /dev/vdb           5 GiB  In-use   Data
testpool   /dev/vdc           5 GiB  In-use   Data
testpool   /dev/vdd           5 GiB  In-use  Cache
testpool   /dev/vde           5 GiB  In-use   Data

After adding the device, verify that the pool shows the added capacity:

# stratis pool
Name      Total Physical Size   Total Physical Used
testpool             15 GiB           606 MiB

Conclusion

Stratis is a tool designed to make managing storage much simpler. Creating a filesystem with enterprise functionalities like thin-provisioning, snapshots, volume management, and caching can be accomplished quickly and easily with just a few basic commands.
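As an example of that simplicity, creating a snapshot of the filesystem built in this article takes a single command (the snapshot name testfs-snap is just an illustrative choice):

# stratis filesystem snapshot testpool testfs testfs-snap
# stratis filesystem list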

See also Getting Started with Stratis Encryption.