Way back in 2018, Unity announced the official deprecation of the UNET networking solution, which consisted of a low level API (LLAPI) and a high level API (HLAPI). The stated reasons for the deprecation were:
Through our connected games initiatives, we’re revamping how we can make networked games easier, more performant, and multiplayer-ready by default. To make these important changes, we need to start anew. That means existing multiplayer features will be gradually deprecated, with more performant, scalable, and secure technologies taking their place. But don’t worry – games with impacted features will have plenty of time to react.
At that point in time the future was clearly DOTS (the Data Oriented Technology Stack), which included a new solution called Unity NetCode. In the end the migration to DOTS hasn't gone as smoothly as expected, and recently Unity have started back-filling support for GameObject-based development. One recent example was the acquisition, then subsequent free release, of the Bolt visual scripting solution. Today Unity have made a similar move, adopting the open source MLAPI networking project as the new official Unity networking solution. Details from the Unity blog:
One of Unity’s top priorities for 2021 is to expand the Unity ecosystem with a first-party multiplayer networking solution for GameObjects that is easy to set up and extend, scales to meet the needs of high-performance titles, and is seamlessly integrated into the Unity ecosystem.
The existing UNet HLAPI architecture is not well suited for the in-depth evolution that is required to support games at scale. Rest assured, we don’t want to reinvent the wheel. The ecosystem currently offers multiple strong solutions, and the best path toward providing you with the scalable framework we envision is to build on the amazing work that already exists in the community.
We considered various open source software (OSS) alternatives and found a framework that fit our needs. We’re thrilled to share that the OSS multiplayer networking framework MLAPI is joining the Unity family, along with its creator, Albin Corén.
As of today, we’re already working on integrating and evolving MLAPI into what will become Unity’s first-party GameObjects netcode solution. We plan to continue development fully open source, developing in the open and welcoming community contributions. If you are interested, you can join us on the GitHub MLAPI repo.
You can learn more about MLAPI and the ongoing saga of networking on Unity in the video below.
FMOD, perhaps the most popular audio middleware solution for games, just updated their indie developer licenses, effectively making FMOD free for smaller indie game developers. So what defines an indie game developer here? First, you need to make less than $200K gross revenue per year, and second, you need to have less than $500K USD in funding for your game title. There are also some limitations by industry; for example, gambling and simulation projects do not qualify for this license.
The primary details of this announcement came via this tweet:
The key paragraph from the linked legal document is the following:
This EULA grants you the right to use FMOD Studio Engine, for Commercial use, subject to the following:
Total gross revenue / funding per year for the developer, before expenses, is less than $200k (Refer to www.fmod.com/licensing#licensing-faq for information);
FMOD Studio Engine is integrated and redistributed in a game application (Product) only;
FMOD Studio Engine is not distributed as part of a game engine or tool set;
Product includes attribution in accordance with Clause 3.
More details about the licensing changes are available here. FMOD has support for several game engines including Unreal and Unity. If you are a Godot developer, there is FMOD support available via this project, as well as a GDNative version available here. To learn more about FMOD and the new indie licensing, check out the video below.
Epic Games have just released UE 4.26. In this release we see features such as hair and anisotropy reach production-ready status. Additionally there is a new water simulation system (previewed here), better integration of the new Chaos physics system (tutorial here), and a brand new system for creating better skies, lighting and clouds. There were also several advancements on the filmmaking side of the equation, alongside hundreds of other small improvements and bug fixes. With each new Unreal Engine release, more and more functionality traditionally done in your DCC tool of choice, such as modelling, rigging, animating and sculpting, is being added to Unreal.
A summary of new features from the Unreal Engine 4.26 release notes:
The production-ready Hair, Fur, and Feathers system enables you to design the most believable humans and animals. You can use the Volumetric Cloud component along with the Sky Atmosphere and Sky Light to author and render realistic or stylized skies, clouds, and other atmospheric effects with full artistic freedom. The new Water System makes it possible to create believable bodies of water within your landscape terrains that react with your characters, vehicles, and weapons. With an improved and expanded feature set, Chaos physics now lets you simulate Vehicles, Cloth, and Ragdolls in addition to Rigid Bodies so every aspect of the environment comes to life.
Sequencer now works in conjunction with Control Rig and the new full-body IK solution to create new animations inside of Sequencer, reducing the need to use external tools. Movie Render Queue (formerly known as High Quality Media Export) has been enhanced to support render passes enabling further enhancements to be made to the final image in a downstream compositing application. nDisplay multi-display rendering is easier to set up and configure in addition to enabling more pixels to be rendered at a higher frame rate, thus increasing performance and supporting larger LED volumes with existing hardware. The Collaborative Viewer Template has been significantly improved to enhance the collaborative design review experience, and enable more users to join a session. The Remote Control API has been improved to seamlessly connect properties and functionality in Unreal Editor to UI widgets giving users the ability to quickly change properties from an external device, such as an artist on stage changing the sky rotation or the sun position from an iPad.
In the video below we take a quick look at the new water system, along with a quick tutorial on creating hair Alembic files in Blender for export to Unreal Engine, then showcase the new Groom hair functionality.
Over on the Godot Engine blog they recently put together a summary of some of the exciting new 2D features that will arrive in Godot 4. Today we go hands-on with the majority of these new features. In addition to the new tricks showcased in the video there are other 2D improvements in Godot 4 including across the board performance improvements (due to the new Vulkan renderer and internal optimizations) as well as support for 2D signed distance fields.
Highlighted new features include:
new 2D CanvasTexture with support for diffuse, normal and specular maps
better support for 2D lights (all drawn in a single pass)
directional light and shadow support
new child clipping feature
new CanvasGroup for grouping multiple sprites into a single draw call
In addition to these new features, Godot have also just released an update on the improvements to tilemap support, which will be covered in a separate video. Check out the video below to see these excellent new Godot 2D features in action.
There are, like most things in the Unix/Linux world, many ways of doing things with Vagrant, but here are some examples of ways to grow your Vagrantfile portfolio and increase your knowledge and use.
If you have not yet installed vagrant you can follow the first part of this series.
In this section you can also configure provider-specific options. In this case the provider is libvirt, and the specific config looks like this:
config.vm.provider :libvirt do |libvirt|
  libvirt.cpus = 1
  libvirt.memory = 512
end
In the example above, all libvirt VMs will be created with a single CPU and 512 MB of memory unless specifically overridden.
The VM namespace is where you define all machines you want this Vagrantfile to build. Notice that this is still a part of the config section, and lines should therefore begin with ‘config’. All sections or parts of sections have an ‘end’ statement to close them off.
Creating multiple machines at once
Depending on what you need to achieve, this can be a simple loop or multiple machine definitions. To create any number of machines in a series, with the same settings but perhaps different names and/or IP addresses, you can just provide a range as shown here:
(1..5).each do |i|
config.vm.define "server#{i}" do |server|
server.vm.hostname = "server#{i}.example.com"
end
end
This will create 5 servers, named server1 through server5.
Of note, a Ruby-style “for i in 1..3 do” loop doesn’t work as expected here, despite Vagrantfile syntax actually being Ruby. Vagrant evaluates the define blocks lazily, and because “for” doesn’t create a new scope for the loop variable, every block ends up seeing its final value. Use the each method from the example above instead.
If you need servers with different hostnames, different hardware etc then you’ll need to specify them individually, or at least in groups if the situation lends itself to that. Let’s say you need to create a typical web/db/load balancer infrastructure, with 2 web servers, a single database server and a load balancer for the web traffic. Ignoring the specific software setup for this, to simply create the virtual machines ready for provisioning you could use something like this:
# Load Balancer
config.vm.define "loadbal", primary: true do |loadbal|
loadbal.vm.hostname = "loadbal"
end
# Database
config.vm.define "db" do |db|
db.vm.hostname = "db"
end
# Web Servers x2
(1..2).each do |i|
config.vm.define "web#{i}" do |web|
web.vm.hostname = "web#{i}"
end
end
This uses a combination of multiple machine calls and a small loop to build 4 VMs with a single ‘vagrant up’ command.
Networking
Vagrant generally creates its own network for VM access, and you use this with ‘vagrant ssh’. If you create more than one VM then you must use the VM name to identify which one you wish to connect to – vagrant ssh vmname.
There are a number of configuration options available which allow you to interact with your VMs in various ways.
The vagrant-libvirt plugin creates a network for the guests to use. This is automated and will always be present even if you define your own networks. The network is named “vagrant-libvirt” and can be seen either in the Virtual Networks tab of virt-manager’s connection details or by issuing a sudo virsh net-list command.
If you use DHCP for your guests, you can find the individual IP addresses with the virsh net-dhcp-leases command: sudo virsh net-dhcp-leases vagrant-libvirt
Port Forwarding
The simplest change to default networking is port forwarding. This uses a simple format like most Vagrant config: config.vm.network "forwarded_port", guest: 80, host: 8080
This listens on port 8080 on your local machine and forwards connections to port 80 on the Vagrant machine. If you need to use a UDP port, simply add , protocol: "udp" to the end of that line (note the comma, which comes immediately after the second port number).
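Putting that together, a hypothetical UDP forward might look like this (the port numbers are illustrative):

```ruby
# Forward local UDP port 5353 to port 53 (DNS) on the guest
config.vm.network "forwarded_port", guest: 53, host: 5353, protocol: "udp"
```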
Obviously for more complex configurations this might not be ideal, as you need to specify every single port you want to forward. If you then add multiple machines the complexity can really become too much.
In addition to this, anyone on your network can access these ports if they know your IP address, so that’s something you should be aware of.
Public Network
This creates a network card for the Vagrant VM which connects to your host network, and will therefore be visible to all machines on that network. As Vagrant is not designed to be secure, you should be aware of any vulnerabilities and take steps to protect against them.
To configure a public network, add config.vm.network "public_network" to your Vagrantfile. This will use DHCP to obtain a network address.
If you wish to assign a static IP address, you can add one to the end of the network declaration: config.vm.network "public_network", ip: "192.168.0.1"
If you’re creating multiple guests you can put the network configuration in the vm namespace, and even allocate IPs based on iteration too:
Vagrant.configure("2") do |config|
config.vm.box = "centos/8"
config.vm.provider :libvirt do |libvirt|
libvirt.qemu_use_session = false
end
# Servers x2
(1..2).each do |i|
config.vm.define "server#{i}" do |server|
server.vm.hostname = "server#{i}"
server.vm.network "public_network", ip: "192.168.122.20#{i}"
end
end
end
Private Network
This works very much like the public network option, except the network is available only to the host machine and the Vagrant guests. The syntax is almost identical too: config.vm.network "private_network", type: "dhcp"
This will create a new network in libvirt, usually named something like “vagrant-private-dhcp” – you can see this with the command sudo virsh net-list while the VM is running. This network is created and destroyed along with the vagrant guests.
Again, the network config can be specified for all guests, or per guest as shown in the public network example above.
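As with the public network, a static address can be assigned instead of using DHCP (the address here is illustrative):

```ruby
# Private network with a fixed IP instead of DHCP
config.vm.network "private_network", ip: "192.168.50.4"
```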
Provisioning
Once you have your VMs defined, you can obviously then do whatever you want with them, but as soon as you issue a ‘vagrant destroy’ command any changes will be lost. This is where automated provisioning comes in.
You can use several methods to provision your machines, from simple file copies to shell scripts, Ansible, Chef and Puppet. I’ll cover the simple ones here; if you need to use something else, please read the documentation as it’s all covered.
File uploads
To copy a file to the Vagrant guest, add a line to the Vagrantfile like this:
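Using the file provisioner, with a source and destination matching the description that follows (the paths are illustrative):

```ruby
# Copy the local "folder" directory to $HOME/remote/newfolder on the guest
config.vm.provision "file", source: "folder", destination: "$HOME/remote/newfolder"
```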
The directory structure should already exist on the Vagrant host, and will be copied in its entirety, including subdirectories and files.
Note: If you add a trailing slash to the destination path, the source path will be placed under this so make sure you only do this if you want that outcome. For example, if the above destination was “$HOME/remote/newfolder/”, then the result would see “$HOME/remote/newfolder/folder” created with the contents of the source placed here.
Shell commands
You can include individual commands, inline scripts or external scripts to perform provisioning tasks.
A single command would take this form, and any valid command line command can be used here: config.vm.provision "shell", inline: "sudo dnf update -y"
An inline script is less common, and declared at the top of the Vagrantfile then called during provisioning:
$script = <<-SCRIPT
echo I am provisioning...
date > /etc/vagrant_provisioned_at
SCRIPT
Vagrant.configure("2") do |config|
config.vm.provision "shell", inline: $script
end
More common is the external shell script, which gives more flexibility and makes code more modular. Vagrant uploads the file to the guest then executes it. Simply call the script in the provisioning line:
config.vm.provision "shell", path: "script.sh"
The file need not be local to the Vagrant host either:
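The path option also accepts a remote URL, in which case Vagrant downloads the script before running it (the URL below is a placeholder):

```ruby
# Fetch and run a remote provisioning script (placeholder URL)
config.vm.provision "shell", path: "https://example.com/provision.sh"
```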
You specify an Ansible playbook to provision your VM in the following way:
config.vm.provision "ansible" do |ansible|
ansible.playbook = "playbook.yml"
end
This then calls the playbook, which runs just as any externally-run Ansible playbook would.
If you’re building multiple VMs with your Vagrantfile then it’s likely you want different configurations for some of them, and in this case you should provision within the definition of each VM, as shown here:
# Web Servers x2
(1..2).each do |i|
config.vm.define "web#{i}" do |web|
web.vm.hostname = "web#{i}"
web.vm.provision "ansible" do |ansible|
ansible.playbook = "web.yml"
end
end
end
Ansible provisioners come in two formats: ansible and ansible_local. The ansible provisioner requires that Ansible is installed on the Vagrant host and connects remotely to your guest VMs to provision them, which means all necessary ssh authentication must be in place for it to work. The ansible_local provisioner executes playbooks directly on the guest VMs, which requires Ansible to be installed on each of the guests you want to provision; Vagrant will try to install Ansible on the guests in order to do this (this can be controlled with the install option, which is enabled by default). On RHEL-style systems like Fedora, Ansible is installed from the EPEL repository. Simply use either ansible or ansible_local in the config.vm.provision command to choose the style you need.
Synced Folders
Vagrant allows you to sync folders between your Vagrant host and your guests, allowing access to configuration files, data, etc. By default, the folder containing the Vagrantfile is shared and mounted under /vagrant on each guest.
To configure additional synced folders, use the config.vm.synced_folder command:
config.vm.synced_folder "src/", "/srv/website"
The two parameters are the source folder on the Vagrant host and the mount directory on the guest. The destination folder will be created if it does not exist, recursively if necessary.
Synced folders accept a number of options, including one to disable them completely. Other options let you specify the group owner of the folder (group), the folder owner (owner), plus mount options. There are others, but these are the main ones.
You can disable the default share with the following command:
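Using the standard synced-folder syntax, the default share of the project directory is turned off like so:

```ruby
# Disable the default share of the project directory at /vagrant
config.vm.synced_folder ".", "/vagrant", disabled: true
```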
When using Vagrant on a Linux host, synced folders use NFS (with the exception of the default share, which uses rsync; see below), so you must have NFS installed on the Vagrant host, and the guests also need NFS support installed. To use NFS with non-Linux hosts, simply specify the folder type as ‘nfs’:
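A minimal sketch, reusing the paths from the earlier synced-folder example:

```ruby
# Explicitly use NFS for this synced folder
config.vm.synced_folder "src/", "/srv/website", type: "nfs"
```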
Rsync shares are the easiest to use, as they usually work without any intervention on a Linux host. This is a one-way sync from host to guest, performed at startup (vagrant up) or after a vagrant reload command is issued. The default share of the Vagrant project directory is done with rsync. To configure a synced folder with rsync, specify the type as ‘rsync’:
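Again reusing the earlier example paths:

```ruby
# One-way sync from host to guest at 'vagrant up' / 'vagrant reload' time
config.vm.synced_folder "src/", "/srv/website", type: "rsync"
```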
Every month Epic Games give away several assets on the Unreal Engine Marketplace, and December is no exception. This month we have 5 new assets that are available for free until the first Tuesday in January, but once “purchased” those assets are yours to use free forever. Speaking of forever, there is also one new asset in the permanently free collection.
A common question with these assets is whether they can be used outside of Unreal Engine. Generally the answer is yes, unless the asset was owned or sourced directly by Epic Games, like the Power IK asset this month, in which case it can only be used in Unreal Engine projects. You can learn more about this month’s UE4 asset giveaway in the video below.
We first looked at the GDevelop game engine back in 2017 in our Closer Look Game Engine series. In the intervening years, GDevelop 5 has come a long way, bringing more and more features to this impressive open source, cross-platform 2D game engine. In the past year there have been over a dozen new beta releases of the engine, including several community contributions. There have also been some updates as a result of the 2020 Google Summer of Code. While many of these releases aren’t large enough to justify a video, taken as a whole it is certainly time to revisit this game engine and the improvements it has seen.
a new asset store with hundreds of ready-made game objects
a new analytics system that doesn’t require a third-party solution
better support for right-to-left languages
support for dynamic 2D lights
customizable keyboard shortcuts
peer to peer communication extension
live preview (hot reloading) support
command palette for quickly launching editors
new editor themes
These are just a few highlights of the dozens of releases over the last few months. If you are interested in checking out GDevelop it’s available for Windows, Mac, Linux and Online. It is also an open source project with the source code available on GitHub under the MIT open source license. If you want to learn more or run into problems, be sure to check out their Discord server. You can learn more about GDevelop and see it in action in the video below.
When adding storage to a Linux server, system administrators often use commands like pvcreate, vgcreate, lvcreate, and mkfs to integrate the new storage into the system. Stratis is a command-line tool designed to make managing storage much simpler. It creates, modifies, and destroys pools of storage. It also allocates and deallocates filesystems from the storage pools.
Instead of an entirely in-kernel approach like ZFS or Btrfs, Stratis takes a hybrid approach with components in both user space and kernel space. It builds on existing block device managers like device mapper and existing filesystems like XFS. Monitoring and control are performed by a user space daemon.
Stratis tries to avoid some ZFS characteristics like restrictions on adding new hard drives or replacing existing drives with bigger ones. One of its main design goals is to achieve a positive command-line experience.
Install Stratis
Begin by installing the required packages. Several Python-related dependencies will be automatically pulled in. The stratisd package provides the stratisd daemon which creates, manages, and monitors local storage pools. The stratis-cli package provides the stratis command along with several Python libraries.
# yum install -y stratisd stratis-cli
Next, enable the stratisd service.
# systemctl enable --now stratisd
Note that the “enable --now” syntax shown above both permanently enables and immediately starts the service.
After determining what disks/block devices are present and available, the three basic steps to using Stratis are:
Create a pool of the desired disks.
Create a filesystem in the pool.
Mount the filesystem.
In the following example, four virtual disks are available in a virtual machine. Be sure not to use the root/system disk (/dev/vda in this example)!
# stratis pool create testpool /dev/vdb /dev/vdc
# stratis pool list
Name Total Physical Size Total Physical Used
testpool 10 GiB 56 MiB
After creating the pool, check the status of its block devices:
# stratis blockdev list
Pool Name Device Node Physical Size State Tier
testpool /dev/vdb 5 GiB In-use Data
testpool /dev/vdc 5 GiB In-use Data
Create a filesystem using Stratis
Next, create a filesystem. As mentioned earlier, Stratis uses the existing DM (device mapper) and XFS filesystem technologies to create thinly-provisioned filesystems. By building on these existing technologies, large filesystems can be created and it is possible to add physical storage as storage needs grow.
# stratis fs create testpool testfs
# stratis fs list
Pool Name Name Used Created Device UUID
testpool testfs 546 MiB Apr 18 2020 09:15 /stratis/testpool/testfs 095fb4891a5743d0a589217071ff71dc
Note that “fs” in the example above can optionally be written out as “filesystem”.
Mount the filesystem
Next, create a mount point and mount the filesystem.
# mkdir /testdir
# mount /stratis/testpool/testfs /testdir
# df -h | egrep 'stratis|Filesystem'
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/stratis-1-3e8e[truncated]71dc 1.0T 7.2G 1017G 1% /testdir
The actual space used by a filesystem is shown by the stratis fs list command demonstrated previously. Notice how the testfs filesystem mounted at /testdir has a virtual size of 1.0T. If the data in a filesystem approaches its virtual size, and there is available space in the storage pool, Stratis will automatically grow the filesystem. Note that beginning with Fedora 34, the form of the device path will be /dev/stratis/<pool-name>/<filesystem-name>.
Add the filesystem to fstab
To configure automatic mounting of the filesystem at boot time, run the following commands:
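An entry along these lines (matching the pool and filesystem created earlier) does the job; the x-systemd.requires mount option ensures the stratisd service is started before the mount is attempted at boot:

```shell
# Append an fstab entry for the Stratis filesystem created above
echo "/stratis/testpool/testfs /testdir xfs defaults,x-systemd.requires=stratisd.service 0 0" >> /etc/fstab
```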
After updating fstab, verify that the entry is correct by unmounting and mounting the filesystem:
# umount /testdir
# mount /testdir
# df -h | egrep 'stratis|Filesystem'
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/stratis-1-3e8e[truncated]71dc 1.0T 7.2G 1017G 1% /testdir
Adding cache devices with Stratis
Suppose /dev/vdd is an available SSD (solid state disk). To configure it as a cache device and check its status, use the following commands:
# stratis pool add-cache testpool /dev/vdd
# stratis blockdev
Pool Name Device Node Physical Size State Tier
testpool /dev/vdb 5 GiB In-use Data
testpool /dev/vdc 5 GiB In-use Data
testpool /dev/vdd 5 GiB In-use Cache
Growing the storage pool
Suppose the testfs filesystem is close to using all the storage capacity of testpool. You could add an additional disk/block device to the pool with commands similar to the following:
# stratis pool add-data testpool /dev/vde
# stratis blockdev
Pool Name Device Node Physical Size State Tier
testpool /dev/vdb 5 GiB In-use Data
testpool /dev/vdc 5 GiB In-use Data
testpool /dev/vdd 5 GiB In-use Cache
testpool /dev/vde 5 GiB In-use Data
After adding the device, verify that the pool shows the added capacity:
# stratis pool
Name Total Physical Size Total Physical Used
testpool 15 GiB 606 MiB
Conclusion
Stratis is a tool designed to make managing storage much simpler. Creating a filesystem with enterprise functionalities like thin-provisioning, snapshots, volume management, and caching can be accomplished quickly and easily with just a few basic commands.
Sound Particles is a one-of-a-kind program for rendering 3D audio, capable of supporting thousands to millions of sounds in your simulation. Sound Particles has been battle-tested in big-budget movies such as Alita, Ready Player One and the new Star Wars films, as well as games such as Assassin’s Creed Origins.
Sound Particles is a sound design software application capable of generating thousands (even millions) of sounds in a virtual 3D audio world. This immersive audio application will enable you to create highly complex sounds on the fly, which will ultimately enable you to design sound better and faster than ever.
Sound Design The best 3D software for complex sound design. Used for film, videogames and virtual reality.
Postproduction Working in postproduction? Use Sound Particles to add depth and richness to your sounds.
Immersive Audio Supports immersive audio formats, such as Ambisonics, Dolby Atmos, Auro-3D and much more.
If you are interested in trying this unique audio application, they have a fully functional demonstration for Windows and Mac available here. All licenses, from indie to enterprise, are currently 50% off during Black Friday/Cyber Monday. You can see Sound Particles in action in the video below. Sounds used during this demo were all downloaded from the excellent FreeSound.org website.
Every year here on GameFromScratch we gather all of the relevant game development related Black Friday and Cyber Monday deals and 2020 is no exception. What follows is a list of the best deals we have found for game developers, be it programmers, musicians or artists. It will be updated as additional deals are discovered so be sure to check back. Links below may contain an affiliate code that pays GFS a small commission if you make a purchase.
A collection of 700+ assets at 50% off, as well as an asset of the day at 70% off. Additionally, Unity Pro and Unity Enterprise licenses come with a free gift worth up to $1600.
This one runs until the end of the year, get audio plugins and premium memberships at a 25% discount, with higher discounts the more items you purchase.
Lots of gear, computers etc on sale, free shipping.
Amazon
You Know… It’s Amazon
You can learn more about the above sales in the video below. Leave a comment on the video or the GFS discord if you spot another deal you want to share with your fellow game developers and I will add it to the list.