
Vagrant beyond the basics

There are, as with most things in the Unix/Linux world, many ways of doing things with Vagrant. Here are some examples to grow your Vagrantfile portfolio and broaden your knowledge and use of the tool.

If you have not yet installed Vagrant, you can follow the first part of this series.

Some Vagrantfile basics

All Vagrantfiles start with Vagrant.configure("2") do |config| and finish with a corresponding end:

Vagrant.configure("2") do |config|
  ...
  ...
end

The “2” represents the version of the configuration object (not of Vagrant itself), and is currently either 1 or 2. Unless you need to use the older version, simply stick with the latest.

The config structure is broken down into namespaces:

config.vm – modify the configuration of the machine(s) that Vagrant manages.

config.ssh – for configuring how Vagrant will access your machine over SSH.

config.winrm – configuring how Vagrant will access your Windows guest over WinRM.

config.winssh – the WinSSH communicator is built specifically for the Windows native port of OpenSSH.

config.vagrant – modify the behavior of Vagrant itself.

Each line in a namespace begins with the word ‘config’:

config.vm.box = "fedora/32-cloud-base"
config.vm.network "private_network"

There are many options here, and a read of the documentation pages is strongly recommended. They can be found at https://www.vagrantup.com/docs/vagrantfile

Also in this section you can configure provider-specific options. In this case the provider is libvirt, and the specific config looks like this:

config.vm.provider :libvirt do |libvirt|
  libvirt.cpus = 1
  libvirt.memory = 512
end

In the example above, all libvirt VMs will be created with a single CPU and 512 MB of memory unless specifically overridden.
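For example, a single machine can override those defaults inside its own definition. This is a sketch only; the machine name and sizes here are hypothetical:

```ruby
config.vm.define "buildhost" do |buildhost|
  buildhost.vm.provider :libvirt do |lv|
    lv.cpus = 2        # overrides the global cpus = 1
    lv.memory = 2048   # overrides the global memory = 512
  end
end
```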

The VM namespace is where you define all machines you want this Vagrantfile to build. Notice that this is still a part of the config section, and lines should therefore begin with ‘config’. All sections or parts of sections have an ‘end’ statement to close them off.

Creating multiple machines at once

Depending on what you need to achieve, this can be a simple loop or multiple machine definitions. To create any number of machines in a series, with the same settings but perhaps different names and/or IP addresses, you can just provide a range as shown here:

 
(1..5).each do |i|
  config.vm.define "server#{i}" do |server|
    server.vm.hostname = "server#{i}.example.com"
  end
end

This will create 5 servers, named server1, server2, server3 etc.

Of note, a Ruby-style “for i in 1..3 do” loop doesn’t behave as expected here, despite Vagrantfile syntax actually being Ruby: a for loop shares a single loop variable across iterations, so the lazily-evaluated machine definitions all end up seeing the final value. Use the .each method from the example above instead.
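Because a Vagrantfile is evaluated as plain Ruby, the iterator behaviour can be checked with ruby on its own. A minimal sketch (no Vagrant involved) of the naming loop:

```ruby
# Collect the machine names the .each loop would generate
names = []
(1..3).each do |i|
  names << "server#{i}"   # each block invocation gets its own i
end
puts names.join(", ")     # server1, server2, server3
```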

If you need servers with different hostnames, different hardware etc then you’ll need to specify them individually, or at least in groups if the situation lends itself to that. Let’s say you need to create a typical web/db/load balancer infrastructure, with 2 web servers, a single database server and a load balancer for the web traffic. Ignoring the specific software setup for this, to simply create the virtual machines ready for provisioning you could use something like this:

# Load Balancer – the primary machine (only one machine may be marked primary)
config.vm.define "loadbal", primary: true do |loadbal|
  loadbal.vm.hostname = "loadbal"
end

# Database
config.vm.define "db" do |db|
  db.vm.hostname = "db"
end

# Web Servers x2
(1..2).each do |i|
  config.vm.define "web#{i}" do |web|
    web.vm.hostname = "web#{i}"
  end
end

This uses a combination of multiple machine calls and a small loop to build 4 VMs with a single ‘vagrant up’ command.

Networking

Vagrant generally creates its own network for VM access, and you use this with ‘vagrant ssh’. If you create more than one VM then you must use the VM name to identify which one you wish to connect to – vagrant ssh vmname.

There are a number of configuration options available which allow you to interact with your VMs in various ways.

The vagrant-libvirt plugin creates a network for the guests to use. This is automated and will always be present even if you define your own networks. The network is named “vagrant-libvirt” and can be seen either in the Virtual Networks tab of virt-manager’s connection details or by issuing a sudo virsh net-list command.

If you use DHCP for your guests, you can find the individual IP addresses with the virsh net-dhcp-leases command: sudo virsh net-dhcp-leases vagrant-libvirt

Port Forwarding

The simplest change to default networking is port forwarding. This uses a simple format like most Vagrant config: config.vm.network "forwarded_port", guest: 80, host: 8080

This listens on port 8080 on your local machine and forwards connections to port 80 on the Vagrant machine. If you need to use a UDP port, simply add , protocol: "udp" to the end of that line (note the comma, which should come immediately after the second port number).
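To make the syntax concrete, here is a sketch forwarding a TCP and a UDP port side by side (the port numbers are arbitrary examples):

```ruby
# TCP: host 8080 -> guest 80
config.vm.network "forwarded_port", guest: 80, host: 8080
# UDP: host 5514 -> guest 514; note the protocol option after the second port
config.vm.network "forwarded_port", guest: 514, host: 5514, protocol: "udp"
```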

Obviously, for more complex configurations this might not be ideal, as you need to specify every single port you want to forward. If you then add multiple machines, the complexity quickly becomes unmanageable.

In addition to this, anyone on your network can access these ports if they know your IP address, so that’s something you should be aware of.

Public Network

This creates a network card for the Vagrant VM which connects to your host network, and will therefore be visible to all machines on that network. As Vagrant is not designed to be secure, you should be aware of any vulnerabilities and take steps to protect against them.

To configure a public network, add config.vm.network "public_network" to your Vagrantfile. This will use DHCP to obtain a network address.

If you wish to assign a static IP address, you can add one to the end of the network declaration: config.vm.network "public_network", ip: "192.168.0.1"

If you’re creating multiple guests you can put the network configuration in the vm namespace, and even allocate IPs based on iteration too:

Vagrant.configure("2") do |config|
  config.vm.box = "centos/8"
  config.vm.provider :libvirt do |libvirt|
    libvirt.qemu_use_session = false
  end

  # Servers x2
  (1..2).each do |i|
    config.vm.define "server#{i}" do |server|
      server.vm.hostname = "server#{i}"
      server.vm.network "public_network", ip: "192.168.122.20#{i}"
    end
  end
end

Private Network

This works very much like the Public Network option, except that the network is available only to the host machine and the Vagrant guests. The syntax is almost identical too: config.vm.network "private_network", type: "dhcp"

To use a static IP address, simply add it:

config.vm.network "private_network", ip: "192.168.50.4"

This will create a new network in libvirt, usually named something like “vagrant-private-dhcp” – you can see this with the command sudo virsh net-list while the VM is running. This network is created and destroyed along with the vagrant guests.

Again, the network config can be specified for all guests, or per guest as shown in the public network example above.

Provisioning

Once you have your VMs defined, you can obviously then do whatever you want with them, but as soon as you issue a ‘vagrant destroy’ command any changes will be lost. This is where automated provisioning comes in.

You can use several methods to provision your machines, from simple file copies to shell scripts, Ansible, Chef and Puppet. I’ll cover the simple ones here; if you need to use something else, please read the documentation as it’s all covered.

File uploads

To copy a file to the Vagrant guest, add a line to the Vagrantfile like this:

config.vm.provision "file", source: "~/myfile", destination: "myfile"

You can copy directories too:

config.vm.provision "file", source: "~/path/to/host/folder", destination: "$HOME/remote/newfolder"

The directory structure should already exist on the Vagrant host, and will be copied in its entirety, including subdirectories and files.

Note: If you add a trailing slash to the destination path, the source path will be placed under this so make sure you only do this if you want that outcome. For example, if the above destination was “$HOME/remote/newfolder/”, then the result would see “$HOME/remote/newfolder/folder” created with the contents of the source placed here.
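The difference can be seen side by side (paths here are illustrative only):

```ruby
# Copies the contents of "folder" into newfolder
config.vm.provision "file", source: "~/path/to/host/folder", destination: "$HOME/remote/newfolder"

# Trailing slash: creates newfolder/folder and copies the contents there
config.vm.provision "file", source: "~/path/to/host/folder", destination: "$HOME/remote/newfolder/"
```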

Shell commands

You can include individual commands, inline scripts or external scripts to perform provisioning tasks.

A single command takes this form, and any valid shell command can be used here: config.vm.provision "shell", inline: "sudo dnf update -y"

An inline script is less common, and declared at the top of the Vagrantfile then called during provisioning:

$script = <<-SCRIPT
echo I am provisioning...
date > /etc/vagrant_provisioned_at
SCRIPT

Vagrant.configure("2") do |config|
  config.vm.provision "shell", inline: $script
end

More common is the external shell script, which gives more flexibility and makes code more modular. Vagrant uploads the file to the guest then executes it. Simply call the script in the provisioning line:

config.vm.provision "shell", path: "script.sh"

The file need not be local to the Vagrant host either:

config.vm.provision "shell", path: "https://example.com/provisioner.sh"

Ansible

To use Ansible to provision your VMs you must have it installed on the Vagrant host; see https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#installing-ansible-on-rhel-centos-or-fedora.

You specify an Ansible playbook to provision your VM in the following way:

config.vm.provision "ansible" do |ansible|
  ansible.playbook = "playbook.yml"
end

This then calls the playbook, which will run as any externally-run ansible playbook would.

If you’re building multiple VMs with your Vagrantfile then it’s likely you want different configurations for some of them, and in this case you should provision within the definition of each VM, as shown here:

# Web Servers x2
(1..2).each do |i|
  config.vm.define "web#{i}" do |web|
    web.vm.hostname = "web#{i}"
    web.vm.provision "ansible" do |ansible|
      ansible.playbook = "web.yml"
    end
  end
end

Ansible provisioners come in two formats: ansible and ansible_local. The ansible provisioner requires that Ansible is installed on the Vagrant host, and connects remotely to your guest VMs to provision them, so all necessary SSH authentication must be in place for it to work. The ansible_local provisioner executes playbooks directly on the guest VMs, which requires Ansible to be installed on each guest you want to provision. Vagrant will try to install Ansible on the guests for you (this can be controlled with the install option, which is enabled by default). On RHEL-style systems like Fedora, Ansible is installed from the EPEL repository. Simply use either ansible or ansible_local in the config.vm.provision call to choose the style you need.
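As a sketch, the web-server block from earlier could use ansible_local instead, reusing the same playbook name; the install line just makes the default explicit:

```ruby
# Web server provisioned on the guest itself with ansible_local
web.vm.provision "ansible_local" do |ansible|
  ansible.playbook = "web.yml"
  ansible.install  = true   # default: Vagrant installs Ansible on the guest
end
```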

Synced Folders

Vagrant allows you to sync folders between your Vagrant host and your guests, giving access to configuration files, data and so on. By default, the folder containing the Vagrantfile is shared and mounted under /vagrant on each guest.

To configure additional synced folders, use the config.vm.synced_folder command:

config.vm.synced_folder "src/", "/srv/website"

The two parameters are the source folder on the Vagrant host and the mount directory on the guest. The destination folder will be created if it does not exist, recursively if necessary.

Synced folders accept a number of options, including the ability to disable them completely (disabled), set the group owner of the folder (group) and the folder owner (owner), plus mount options. There are others, but these are the main ones.

You can disable the default share with the following command:

config.vm.synced_folder ".", "/vagrant", disabled: true

Other options are configured as follows:

config.vm.synced_folder "src/", "/srv/website",
  owner: "apache", group: "apache"

NFS synced folders

When using Vagrant on a Linux host, synced folders use NFS (with the exception of the default share, which uses rsync; see below), so you must have NFS installed on the Vagrant host, and the guests also need NFS client support. To request NFS explicitly, specify the folder type as ‘nfs’:

config.vm.synced_folder ".", "/vagrant", type: "nfs"

RSync synced folders

These are the easiest to use as they usually work without any intervention on a Linux host. This is a one-way sync from host to guest performed at startup (vagrant up) or after a vagrant reload command is issued. The default share of the Vagrant project directory is done with rsync. To configure a synced folder with rsync, specify the type as ‘rsync’:

config.vm.synced_folder ".", "/vagrant", type: "rsync"


Unreal Engine Asset Giveaway For December 2020

Every month Epic Games gives away several assets on the Unreal Engine marketplace, and December is no exception. This month we have 5 new assets that are available for free until the first Tuesday in January, but once “purchased” those assets are yours to use free forever. Speaking of forever, there is also one new asset in the permanently free collection.

This month’s free assets include:

This month’s permanently free asset is:

A common question with these assets is whether they can be used outside of Unreal Engine. Generally the answer is yes, unless the asset was owned or sourced directly by Epic Games, like the Power IK asset this month, in which case it can only be used in Unreal Engine projects. You can learn more about this month’s UE4 asset giveaway in the video below.

[youtube https://www.youtube.com/watch?v=mrX4ZpqUfVU?feature=oembed&w=1500&h=844]

GDevelop Game Engine Revisited

We first looked at the GDevelop game engine back in 2017 in our Closer Look Game Engine series. In the intervening years, GDevelop 5 has come a long way, bringing more and more features to this impressive open source cross platform 2D game engine. In the past year there have been over a dozen new beta releases to the engine including several community contributions. There have also been some updates as a result of the 2020 Google Summer of Code. While many of these releases aren’t large enough to justify a video, taken as a whole it is certainly time to revisit this game engine and the improvements it has seen.

Some of the highlights of recent releases include:

  • support for a new asset store with hundreds of ready-made game objects
  • new analytics system without requiring a third party solution
  • better support for right to left languages
  • support for dynamic 2D lights
  • customizable keyboard shortcuts
  • peer to peer communication extension
  • live preview (hot reloading) support
  • command palette for quickly launching editors
  • new editor themes

These are just a few highlights of the dozens of releases over the last few months. If you are interested in checking out GDevelop it’s available for Windows, Mac, Linux and Online. It is also an open source project with the source code available on GitHub under the MIT open source license. If you want to learn more or run into problems, be sure to check out their Discord server. You can learn more about GDevelop and see it in action in the video below.

[youtube https://www.youtube.com/watch?v=0Vni4hXQAx8?feature=oembed&w=1500&h=844]

Getting started with Stratis – up and running

When adding storage to a Linux server, system administrators often use commands like pvcreate, vgcreate, lvcreate, and mkfs to integrate the new storage into the system. Stratis is a command-line tool designed to make managing storage much simpler. It creates, modifies, and destroys pools of storage. It also allocates and deallocates filesystems from the storage pools.

Instead of an entirely in-kernel approach like ZFS or Btrfs, Stratis uses a hybrid approach with components in both user space and kernel space. It builds on existing block device managers like device mapper and existing filesystems like XFS. Monitoring and control is performed by a user space daemon.

Stratis tries to avoid some ZFS characteristics like restrictions on adding new hard drives or replacing existing drives with bigger ones. One of its main design goals is to achieve a positive command-line experience.

Install Stratis

Begin by installing the required packages. Several Python-related dependencies will be automatically pulled in. The stratisd package provides the stratisd daemon which creates, manages, and monitors local storage pools. The stratis-cli package provides the stratis command along with several Python libraries.

# yum install -y stratisd stratis-cli

Next, enable the stratisd service.

# systemctl enable --now stratisd

Note that the "enable --now" syntax shown above both permanently enables and immediately starts the service.

After determining what disks/block devices are present and available, the three basic steps to using Stratis are:

  1. Create a pool of the desired disks.
  2. Create a filesystem in the pool.
  3. Mount the filesystem.

In the following example, four virtual disks are available in a virtual machine. Be sure not to use the root/system disk (/dev/vda in this example)!

# sfdisk -s
/dev/vda: 31457280
/dev/vdb:  5242880
/dev/vdc:  5242880
/dev/vdd:  5242880
/dev/vde:  5242880
total: 52428800 blocks

Create a storage pool using Stratis

# stratis pool create testpool /dev/vdb /dev/vdc
# stratis pool list
Name      Total Physical Size  Total Physical Used
testpool  10 GiB               56 MiB

After creating the pool, check the status of its block devices:

# stratis blockdev list
Pool Name   Device Node Physical Size   State  Tier
testpool  /dev/vdb            5 GiB  In-use  Data
testpool  /dev/vdc            5 GiB  In-use  Data

Create a filesystem using Stratis

Next, create a filesystem. As mentioned earlier, Stratis uses the existing DM (device mapper) and XFS filesystem technologies to create thinly-provisioned filesystems. By building on these existing technologies, large filesystems can be created and it is possible to add physical storage as storage needs grow.

# stratis fs create testpool testfs
# stratis fs list
Pool Name  Name    Used     Created            Device                    UUID
testpool   testfs  546 MiB  Apr 18 2020 09:15  /stratis/testpool/testfs  095fb4891a5743d0a589217071ff71dc

Note that “fs” in the example above can optionally be written out as “filesystem”.

Mount the filesystem

Next, create a mount point and mount the filesystem.

# mkdir /testdir
# mount /stratis/testpool/testfs /testdir
# df -h | egrep 'stratis|Filesystem'
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/stratis-1-3e8e[truncated]71dc  1.0T  7.2G 1017G   1% /testdir

The actual space used by a filesystem is shown using the stratis fs list command demonstrated previously. Notice how the testdir filesystem has a virtual size of 1.0T. If the data in a filesystem approaches its virtual size, and there is available space in the storage pool, Stratis will automatically grow the filesystem. Note that beginning with Fedora 34, the form of device path will be /dev/stratis/<pool-name>/<filesystem-name>.

Add the filesystem to fstab

To configure automatic mounting of the filesystem at boot time, run the following commands:

# UUID=`lsblk -n -o uuid /stratis/testpool/testfs`
# echo "UUID=${UUID} /testdir xfs defaults 0 0" >> /etc/fstab

After updating fstab, verify that the entry is correct by unmounting and mounting the filesystem:

# umount /testdir
# mount /testdir
# df -h | egrep 'stratis|Filesystem'
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/stratis-1-3e8e[truncated]71dc  1.0T  7.2G 1017G   1% /testdir

Adding cache devices with Stratis

Suppose /dev/vdd is an available SSD (solid state disk). To configure it as a cache device and check its status, use the following commands:

# stratis pool add-cache testpool  /dev/vdd
# stratis blockdev
Pool Name   Device Node Physical Size  State   Tier
testpool   /dev/vdb            5 GiB  In-use   Data
testpool   /dev/vdc            5 GiB  In-use   Data
testpool   /dev/vdd            5 GiB  In-use  Cache

Growing the storage pool

Suppose the testfs filesystem is close to using all the storage capacity of testpool. You could add an additional disk/block device to the pool with commands similar to the following:

# stratis pool add-data testpool /dev/vde
# stratis blockdev
Pool Name Device Node Physical Size   State   Tier
testpool   /dev/vdb           5 GiB  In-use   Data
testpool   /dev/vdc           5 GiB  In-use   Data
testpool   /dev/vdd           5 GiB  In-use  Cache
testpool   /dev/vde           5 GiB  In-use   Data

After adding the device, verify that the pool shows the added capacity:

# stratis pool
Name      Total Physical Size   Total Physical Used
testpool             15 GiB           606 MiB

Conclusion

Stratis is a tool designed to make managing storage much simpler. Creating a filesystem with enterprise functionalities like thin-provisioning, snapshots, volume management, and caching can be accomplished quickly and easily with just a few basic commands.

See also Getting Started with Stratis Encryption.


Hands-On With Sound Particles

Sound Particles is a one-of-a-kind program for rendering 3D audio, capable of supporting thousands to millions of sounds in your simulation. Sound Particles has been battle tested in big-budget movies such as Alita, Ready Player One and the new Star Wars films, as well as games such as Assassin’s Creed Origins.

Features of Sound Particles:

Sound Particles is a sound design software application capable of generating thousands (even millions) of sounds in a virtual 3D audio world. This immersive audio application lets you create highly complex sounds on the fly, ultimately enabling you to design sound better and faster than ever.

Sound Design
The best 3D software for complex sound design. Used for film, videogames and virtual reality.

Postproduction
Working in postproduction? Use Sound Particles to add depth and richness to your sounds.

Immersive Audio
Supports immersive audio formats such as Ambisonics, Dolby Atmos, Auro-3D and many more.

If you are interested in trying this unique audio application, they have a fully functional demonstration for Windows and Mac available here. All licenses, from indie to enterprise, are currently 50% off during Black Friday/Cyber Monday. You can see Sound Particles in action in the video below. Sounds used during this demo were all downloaded from the excellent FreeSound.org website.

[youtube https://www.youtube.com/watch?v=mnoQPqKRHWo?feature=oembed&w=1500&h=844]

Black Friday For Game Developers 2020

Every year here on GameFromScratch we gather all of the relevant game development related Black Friday and Cyber Monday deals and 2020 is no exception. What follows is a list of the best deals we have found for game developers, be it programmers, musicians or artists. It will be updated as additional deals are discovered so be sure to check back. Links below may contain an affiliate code that pays GFS a small commission if you make a purchase.

Software

Applications, Game Engines & Assets

Unity Cyber Week Sale

A collection of 700+ Assets 50% off as well as an asset of the day 70% off. Additionally Unity Pro and Unity Enterprise licenses come with a free gift up to $1600 value.

Unreal Engine

The Unreal Engine marketplace has their Black Friday deal on until January 2nd with hundreds of assets up to 70% off.

3D Coat

Get the 3D Coat 3D modelling and texturing application with a $100 discount and get 3D Coat 2021 individual free from Nov 26th until 30th.

Adobe

Get 25% off all app subscriptions, Nov 27th only

APress Books

Apress Black Friday/Cyber Monday deal offers all books for $6.99 each with code CYBER20AP.

Affinity

All Affinity Products such as Designer and Photo 30% off. Learn more about Affinity Designer here.

Allegorithmic

Substance Indie subscriptions 32% off.

Autodesk

Save 25% on software, including 3DS Max and Maya Subscriptions. Save up to $4,600 on bundle.

A Sound Effect

Sales across the spectrum of indie sound effect creators, plus an exclusive bonus pack with purchase.

BlenderMarket

Flat 25% off on all assets.

CG Trader

Up to 70% off 3D models.

Clip Paint Studio

Up to 50% off this anime-style painting application through Cyber Monday.

Corel

Save up to 60% off Corel products including Painter, Corel Draw and PaintShop Pro.

Daz3D

Get 50% off models store wide.

Flipped Normals

Get 50% off site wide on models, courses, etc.

FL Studio

FL Studio all plugins edition is 55% off.

Foundry

Get Modo subscriptions 30% off through Cyber Monday

Huion

Get up to 30% off and free shipping on Huion graphic tablets.

Humble Bundle

Not technically a Black Friday sale, but there are currently 3 bundles running on Humble of interest to game developers between now and Cyber Monday.

MAGIX

Get 30%+ off on Vegas Pro and Music Maker. Save $549 on Acid Pro 10 Suite + Sound Forge Pro combo.

No Starch Press

Books 33.7% off and free shipping with code BLACKFRIDAY20, conditions apply.

Phaser Editor

Phaser Editor (learn more here) is 50% off all licenses.

Pluralsight

Get 40% off personal and premium subscriptions on PluralSight training.

RealAllusion

Buy 2 get 50% off and other items 50% off through Cyber Monday.

Sketchfab

Use code BLACKFRIDAY to save 30% off on 3D models through Monday.

Smith Micro

Steam Autumn Sale (Store Link)

Steam is also having their annual autumn sale with tons of game development software available at discounted pricing.

TurboSquid

Save 40-50% off 3D models through Monday.

Voxengo

This one runs until the end of the year, get audio plugins and premium memberships at a 25% discount, with higher discounts the more items you purchase.

Yoyo Games

Get 20% off GameMaker licenses until December 1st.


Hardware

Computers & Devices

Acer/Asus/MSI — Nothing in North America

Dell

Dell has a wide variety of laptops including XPS and Alienware on sale as well as random door crasher deals throughout the day.

HP

HP are running door crasher deals throughout the day, in addition get 10% off PCs with the code 10STACKBFCM21.

Lenovo

Doorcrasher laptop sales up to 70% off, random sales across the site.

Microsoft Store

Up to $400 off Surface tablets, $500 on Surface Laptops, plus misc other Windows laptops on sale

NewEgg

Again, just a broad sale across a lot of computer hardware and accessories.

Razer

Up to 50% off a wide swath of Razer stuff, from keyboards to laptops. Laptop deals seem to kinda suck this year.

Tiger Direct

Lots of gear, computers etc on sale, free shipping.

Amazon

You Know… It’s Amazon

You can learn more about the above sales in the video below. Leave a comment on the video or the GFS discord if you spot another deal you want to share with your fellow game developers and I will add it to the list.

[youtube https://www.youtube.com/watch?v=I7GwDkTo4aM?feature=oembed&w=1500&h=844]

Getting started with Fedora CoreOS

This has been called the age of DevOps, and operating systems seem to be getting a little bit less attention than tools are. However, this doesn’t mean that there has been no innovation in operating systems. [Edit: The diversity of offerings from the plethora of distributions based on the Linux kernel is a fine example of this.] Fedora CoreOS has a specific philosophy of what an operating system should be in this age of DevOps.

Fedora CoreOS’ philosophy

Fedora CoreOS (FCOS) came from the merging of CoreOS Container Linux and Fedora Atomic Host. It is a minimal, monolithic OS focused on running containerized applications. With security as a first-class citizen, FCOS provides automatic updates and comes with SELinux hardening.

For automatic updates to work well, they need to be very robust; the goal is that servers running FCOS won’t break after an update. This is achieved by using different release streams (stable, testing and next). Each stream is released every two weeks, and content is promoted from one stream to the next (next -> testing -> stable). That way, updates landing in the stable stream have had the opportunity to be tested over a long period of time.

Getting Started

For this example let’s use the stable stream and a QEMU base image that we can run as a virtual machine. You can use coreos-installer to download that image.

From your (Workstation) terminal, run the following commands after updating the link to the image. [Edit: On Silverblue the container based coreos tools are the simplest method to try. Instructions can be found at https://docs.fedoraproject.org/en-US/fedora-coreos/tutorial-setup/ , in particular “Setup with Podman or Docker”.]

$ sudo dnf install coreos-installer
$ coreos-installer download --image-url https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/32.20200907.3.0/x86_64/fedora-coreos-32.20200907.3.0-qemu.x86_64.qcow2.xz
$ xz -d fedora-coreos-32.20200907.3.0-qemu.x86_64.qcow2.xz
$ ls
fedora-coreos-32.20200907.3.0-qemu.x86_64.qcow2

Create a configuration

To customize a FCOS system, you need to provide a configuration file that will be used by Ignition to provision the system. You may use this file to configure things like creating a user, adding a trusted SSH key, enabling systemd services, and more.

The following configuration creates a ‘core’ user and adds an SSH key to the authorized_keys file. It also creates a systemd service that uses podman to run a simple hello world container.

version: "1.0.0"
variant: fcos
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 my_public_ssh_key_hash fcos_key
systemd:
  units:
    - contents: |
        [Unit]
        Description=Run a hello world web service
        After=network-online.target
        Wants=network-online.target

        [Service]
        ExecStart=/bin/podman run --pull=always --name=hello --net=host -p 8080:8080 quay.io/cverna/hello
        ExecStop=/bin/podman rm -f hello

        [Install]
        WantedBy=multi-user.target
      enabled: true
      name: hello.service

After adding your SSH key in the configuration save it as config.yaml. Next use the Fedora CoreOS Config Transpiler (fcct) tool to convert this YAML configuration into a valid Ignition configuration (JSON format).

Install fcct directly from Fedora’s repositories or get the binary from GitHub.

$ sudo dnf install fcct
$ fcct -output config.ign config.yaml

Install and run Fedora CoreOS

To run the image, you can use the libvirt stack. To install it on a Fedora system using the dnf package manager:

$ sudo dnf install @virtualization

Now let’s create and run a Fedora CoreOS virtual machine:

$ chcon --verbose unconfined_u:object_r:svirt_home_t:s0 config.ign
$ virt-install --name=fcos \
--vcpus=2 \
--ram=2048 \
--import \
--network=bridge=virbr0 \
--graphics=none \
--qemu-commandline="-fw_cfg name=opt/com.coreos/config,file=${PWD}/config.ign" \
--disk=size=20,backing_store=${PWD}/fedora-coreos-32.20200907.3.0-qemu.x86_64.qcow2

Once the installation is successful, some information is displayed and a login prompt is provided.

Fedora CoreOS 32.20200907.3.0
Kernel 5.8.10-200.fc32.x86_64 on an x86_64 (ttyS0)
SSH host key: SHA256:BJYN7AQZrwKZ7ZF8fWSI9YRhI++KMyeJeDVOE6rQ27U (ED25519)
SSH host key: SHA256:W3wfZp7EGkLuM3z4cy1ZJSMFLntYyW1kqAqKkxyuZrE (ECDSA)
SSH host key: SHA256:gb7/4Qo5aYhEjgoDZbrm8t1D0msgGYsQ0xhW5BAuZz0 (RSA)
ens2: 192.168.122.237 fe80::5054:ff:fef7:1a73
Ignition: user provided config was applied
Ignition: wrote ssh authorized keys file for user: core

The Ignition configuration file did not provide any password for the core user, therefore it is not possible to login directly via the console. (Though, it is possible to configure a password for users via Ignition configuration.)

Use the Ctrl + ] key combination to exit the virtual machine’s console. Then check that hello.service is running:

$ curl http://192.168.122.237:8080
Hello from Fedora CoreOS!

Using the preconfigured SSH key, you can also access the VM and inspect the services running on it.

$ ssh core@192.168.122.237
$ systemctl status hello
● hello.service - Run a hello world web service
Loaded: loaded (/etc/systemd/system/hello.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2020-10-28 10:10:26 UTC; 42s ago

Zincati, rpm-ostree and automatic updates

The Zincati service drives rpm-ostreed to perform automatic updates.
Check which version of Fedora CoreOS is currently running on the VM, and check if Zincati has found an update.

$ ssh core@192.168.122.237
$ rpm-ostree status
State: idle
Deployments:
● ostree://fedora:fedora/x86_64/coreos/stable
Version: 32.20200907.3.0 (2020-09-23T08:16:31Z)
Commit: b53de8b03134c5e6b683b5ea471888e9e1b193781794f01b9ed5865b57f35d57
GPGSignature: Valid signature by 97A1AE57C3A2372CCA3A4ABA6C13026D12C944D0
$ systemctl status zincati
● zincati.service - Zincati Update Agent
Loaded: loaded (/usr/lib/systemd/system/zincati.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2020-10-28 13:36:23 UTC; 7s ago
…
Oct 28 13:36:24 cosa-devsh zincati[1013]: [INFO ] initialization complete, auto-updates logic enabled
Oct 28 13:36:25 cosa-devsh zincati[1013]: [INFO ] target release '32.20201004.3.0' selected, proceeding to stage it ... zincati reboot ...

After the reboot, log in remotely once more to check the new version of Fedora CoreOS.

$ ssh core@192.168.122.237
$ rpm-ostree status
State: idle
Deployments:
● ostree://fedora:fedora/x86_64/coreos/stable
Version: 32.20201004.3.0 (2020-10-19T17:12:33Z)
Commit: 64bb377ae7e6949c26cfe819f3f0bd517596d461e437f2f6e9f1f3c24376fd30
GPGSignature: Valid signature by 97A1AE57C3A2372CCA3A4ABA6C13026D12C944D0
ostree://fedora:fedora/x86_64/coreos/stable
Version: 32.20200907.3.0 (2020-09-23T08:16:31Z)
Commit: b53de8b03134c5e6b683b5ea471888e9e1b193781794f01b9ed5865b57f35d57
GPGSignature: Valid signature by 97A1AE57C3A2372CCA3A4ABA6C13026D12C944D0

rpm-ostree status now shows two versions of Fedora CoreOS: the one that came in the QEMU image, and the latest one received from the update. With both versions available, it is possible to roll back to the previous version using the rpm-ostree rollback command.

Finally, you can make sure that the hello service is still running and serving content.

$ curl http://192.168.122.237:8080
Hello from Fedora CoreOS!

More information: Fedora CoreOS updates

Deleting the Virtual Machine

To clean up afterwards, the following commands will delete the VM and associated storage.

$ virsh destroy fcos
$ virsh undefine --remove-all-storage fcos

Conclusion

Fedora CoreOS provides a solid and secure operating system tailored to run applications in containers. It excels in a DevOps environment where hosts are provisioned using declarative configuration files. Automatic updates and the ability to roll back to a previous version of the OS bring peace of mind during the operation of a service.

Learn more about Fedora CoreOS by following the tutorials available in the project’s documentation.


Godot On Steam Using GodotSteam

If you are creating a commercial PC game using Godot, there is a good chance you are going to want to publish on Steam. If that is the case, and your game requires any network services such as achievements, a leaderboard, or DLC, you are probably tempted to use Steam’s own Steamworks suite of APIs. In that case you most likely want to know about GodotSteam, an open source implementation of the Steamworks API for Godot 2/3, providing convenient GDScript interfaces for the vast majority of the Steamworks features.

GodotSteam is an open source project hosted on GitHub that is implemented using the Godot module system. The source code is under the flexible and permissive MIT license. There is a GDNative branch available, although sadly it appears to have been abandoned. Being a module means you will have to download and build your own version of Godot, a process I describe in this video. If the world of Godot, modules and GDNative is all new to you, don’t worry, we have an overview available here.

If you want to get started with GodotSteam there are excellent tutorials and comprehensive documentation available here. You can learn more about Godot, Steamworks and GodotSteam in the video below.


Quixel Mixer 2020.1.6 Released

Hot on the heels of the Quixel Bridge release, today Quixel released version 2020.1.6 of Quixel Mixer. Quixel Mixer is a texture generation tool that is completely free for everybody and includes MegaScans integration for Unreal Engine users. The 2020.1.6 release adds the ability to export masks, as well as 65 new free smart materials.

Details of the release from the Quixel blog:

Following the support of 3D Texturing and Smart Materials, this Quixel Mixer 2020.1.6 adds 65 new scan-based Smart Materials along with a powerful new feature: advanced mask export. This highly requested feature enables you to combine, channel pack and export advanced masks, leveraging Mixer’s versatile mask stack and material blending engine.

The ability to utilize these masks in other applications allows you to easily create high-quality variations of your materials directly inside the app of your choice.

Mixer is available for everyone, for free, forever, including its enormous base library of hundreds of free scans and Smart Materials. What’s more — Unreal Engine users have access to the entire Megascans library for free, right within Mixer. 

Quixel Mixer is available for download here for Windows and macOS. You can learn more about the 2020.1.6 release in the video below.

[youtube https://www.youtube.com/watch?v=KtRcoxqOfaM?feature=oembed&w=1500&h=844]

Blender 2.91 Released

Blender 2.91 was released today, another step forward for the rapidly improving open source 3D application. As with other recent releases, this one includes several sculpting improvements, especially to the cloth brushes, including the ability to collide with other objects in your scene. Other sculpting improvements include several new gesture tools, support for sculpting on the base mesh of a multi-res mesh and the addition of boundary brushes to control the edges of sculpted meshes.

[Image: Blender 2.91 cloth sculpting]

Another major feature is improved Boolean support, including a new exact solver as well as the ability to perform boolean operations on collections of objects. The new exact solver is much more accurate, but at the cost of running slower. This improvement is a welcome one, as the boolean functionality in Blender 2.8x was one of the few areas where it was worse than in previous releases.

In addition to improving volumetric support in the form of OpenVDB support, Blender 2.91 also has the ability to generate volumes from meshes, as well as apply displacements to those volumes. There are a number of other improvements in Blender 2.91, from EEVEE to Grease Pencil. Learn more about the release in the release notes.

You can learn more about the Blender release, including several new features demonstrated in the video below. The 2.91 splash screen is the work of Robin Tran, a concept artist at Ubisoft Massive; you can see more shots here. Blender is available on all major platforms as a free download here, assuming of course their servers aren’t currently on fire due to demand! If you are interested in looking even further into the future of Blender, Blender 2.92 is currently available here in alpha (soon to be beta) form.

[youtube https://www.youtube.com/watch?v=1bs5ujDz_9M?feature=oembed&w=1500&h=844]