Eight months after the release of Raylib 3.0, Raylib 3.5 has just been released. Raylib is an open source, cross-platform C/C++ game framework. Raylib runs on a ton of different platforms and has bindings available for more than 50 different programming languages. The Raylib 3.5 release brings the following new features.
NEW platform supported: Raspberry Pi 4 native mode (no X11 windows) through the DRM subsystem and GBM API. This is a really interesting improvement because it opens the door for raylib to support other embedded platforms (Odroid, GameShell, NanoPi…). Also worth mentioning are the unofficial homebrew ports of raylib for PS4 and PSVita.
NEW configuration options exposed: For custom raylib builds, config.h now exposes more than 150 flags and defines to build raylib with only the desired features. For example, it is possible to build a minimal raylib library of just a few KB by removing all supported external data filetypes, which is very useful for generating small executables or targeting embedded devices.
NEW automatic GIF recording feature: Automatic GIF recording (CTRL+F12) for any raylib application has actually been available for several versions, but the feature was slow and low-performing, relying on an old GIF library with many file accesses. It has been replaced by a high-performance alternative (msf_gif.h) that operates directly on memory… and actually works very well! Try it out!
NEW RenderBatch system: The rlgl module has been redesigned to support custom render batches, allowing draw calls to be grouped as desired; the previous implementation had just one default render batch. This feature has not been exposed through the raylib API yet, but it can be used by advanced users dealing with rlgl directly. For example, multiple RenderBatches can be created for 2D sprites and 3D geometry independently.
NEW Framebuffer system: The rlgl module now exposes an API for custom Framebuffer attachments (including cubemaps!). raylib's RenderTexture is a basic use case, allowing just color and depth textures, but the new API allows the creation of more advanced Framebuffers with multiple attachments, like G-Buffers. The GenTexture*() functions have been redesigned to use this new API.
Improved software rendering: The raylib Image*() API is intended for software rendering, for those cases when no GPU or no Window is available. These functions operate directly on multi-format pixel data in RAM, and they have been completely redesigned to be much faster, especially at small resolutions and for retro-gaming. Low-end embedded devices like microcontrollers with custom displays could benefit from this raylib functionality!
File loading from memory: Multiple functions have been redesigned to load data from memory buffers instead of accessing files directly; all raylib file loading/saving now goes through a couple of functions that load data into memory. This enables custom virtual file systems and gives the user more control over data already loaded into memory (i.e. images, fonts, sounds…).
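As a rough sketch of how the new memory-loading functions fit together (the resource path here is just a placeholder):

#include "raylib.h"

int main(void)
{
    InitWindow(800, 450, "raylib: load from memory");

    // Read the raw bytes first; this buffer could just as easily come
    // from a custom virtual file system or an archive
    unsigned int dataSize = 0;
    unsigned char *fileData = LoadFileData("resources/sprite.png", &dataSize);

    // Decode directly from the memory buffer, using the file
    // extension as a filetype hint
    Image image = LoadImageFromMemory(".png", fileData, (int)dataSize);
    UnloadFileData(fileData);

    Texture2D texture = LoadTextureFromImage(image);
    UnloadImage(image);

    while (!WindowShouldClose())
    {
        BeginDrawing();
        ClearBackground(RAYWHITE);
        DrawTexture(texture, 350, 175, WHITE);
        EndDrawing();
    }

    UnloadTexture(texture);
    CloseWindow();
    return 0;
}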
NEW Window states management system: The raylib core module has been redesigned to support checking and setting Window state more easily, both before and after Window initialization. SetConfigFlags() has been reviewed and SetWindowState() has been added to control Window minimization, maximization, hiding, focus, topmost and more.
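A quick sketch of the state API in use (illustrative only; the full flag list lives in raylib.h):

// Flags can be combined and set before window creation...
SetConfigFlags(FLAG_WINDOW_RESIZABLE | FLAG_MSAA_4X_HINT);
InitWindow(800, 450, "window states");

// ...or checked and toggled afterwards with the new functions
SetWindowState(FLAG_WINDOW_TOPMOST);      // keep the window above all others
ClearWindowState(FLAG_WINDOW_RESIZABLE);  // lock the current window size
if (IsWindowState(FLAG_WINDOW_TOPMOST)) TraceLog(LOG_INFO, "Window is topmost");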
NEW GitHub Actions CI/CD system: The previous CI implementation has been reviewed and greatly improved to support multiple build configurations (platforms, compilers, static/shared builds), and an automatic deploy system has been implemented to attach the different generated artifacts to every new release. As the system seems to work very well, the previous CI platforms (AppVeyor/TravisCI) have been removed.
Release notes are available here and a complete change log is available here. Binary versions of Raylib are available on Raylib.com, while the source code is hosted under the zlib license on GitHub. If you are interested in learning Raylib, you can check out the community on Discord. You can also download Raylib via vcpkg in Visual Studio, with step-by-step instructions available here. You can learn more about Raylib and the 3.5 release in the video below.
Today we are going hands-on with two powerful Godot plugins, Waterways and Heightmap Terrain for Godot. Both are open source add-ons that work in Godot 3.2.x and both are hosted on GitHub. In the video below we showcase using each add-on and show how they work well together.
Formerly known as WaterGenGodot on GitHub, Waterways enables you to quickly create rivers using spline controls. You have full control over the path the river follows and the look of the water, and you even have fine-tuned control over the foam generated by collisions with other objects in the scene.
This add-on adds terrain creation tools to Godot. Either import an existing heightmap or create your own from scratch. You get full sculpting tools for raising and lowering terrain, simulating erosion, etc., as well as tools for painting the texture layer on your newly created terrain. You also get the ability to export as a mesh or heightmap for use in other applications or engines.
Getting Started Tutorial
Installing the plugins is a straightforward exercise. Clone each project from GitHub to a directory of your choice. You can get the git URL on GitHub here:
Assuming you have a git client installed, from a command line run the command git clone followed by the copied URL. For example:
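With a placeholder URL (the real clone URLs come from each project's GitHub page):

git clone https://github.com/<user>/<project>.git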
Now, in your Godot project (or create one if you don't have one already), create a folder called addons, then copy in the addons directory from each of the two just-cloned projects. Next, you simply need to enable each add-on. In Godot go to the Project->Project Settings menu, switch to the Plugins tab and make sure both are enabled:
Now you’re ready to go! Be sure to check out the video below to see both the Waterways & Heightmap Terrain for Godot add-ons in action.
Fedora CoreOS is a lightweight, secure operating system optimized for running containerized workloads. A YAML document is all you need to describe the workload you’d like to run on a Fedora CoreOS server.
This is wonderful for a single server, but how would you describe a fleet of cooperating Fedora CoreOS servers? For example, what if you wanted a set of servers running load balancers, others running a database cluster and others running a web application? How can you get them all configured and provisioned? How can you configure them to communicate with each other? This article looks at how Terraform solves this problem.
Getting started
Before you start, decide whether you need to review the basics of Fedora CoreOS. Check out this previous article on the Fedora Magazine:
Terraform is an open source tool for defining and provisioning infrastructure. It defines infrastructure as code in configuration files, and it provisions infrastructure by calculating the difference between the desired state in code and the observed state, then applying changes to remove the difference.
HashiCorp, the company that created and maintains Terraform, offers an RPM repository to install Terraform.
To get familiar with the tools, start with a simple example: creating a single Fedora CoreOS server in AWS. To follow along, you need to install awscli and have an AWS account. awscli can be installed from the Fedora repositories and configured using the aws configure command:
sudo dnf install -y awscli
aws configure
Please note, AWS is a paid service. If executed correctly, participants should expect less than $1 USD in charges, but mistakes may lead to unexpected charges.
Configuring Terraform
In a new directory, create a file named config.yaml. This file will hold the contents of your Fedora CoreOS configuration. The configuration simply adds an SSH key for the core user. Modify the ssh_authorized_keys section to use your own.
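A minimal sketch of that file; the key is a placeholder for your own public key, and the spec version may need adjusting to whatever your ct provider supports:

variant: fcos
version: 1.2.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-rsa AAAA...  # placeholder: your public key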
Next, create a file main.tf to contain your Terraform specification. Take a look at the contents section by section. It begins with a block to specify the versions of your providers.
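A sketch of that versions block, assuming the hashicorp/aws and poseidon/ct providers (pin whatever versions you actually use):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
    ct = {
      source  = "poseidon/ct"
      version = "~> 0.7"
    }
  }
}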
Terraform uses providers to control infrastructure. Here it uses the AWS provider to provision EC2 servers, but it can provision any kind of AWS infrastructure. The ct provider from Poseidon Labs stands for config transpiler; it transpiles Fedora CoreOS configurations into Ignition configurations, so you do not need to run fcct yourself. Now that your provider versions are specified, configure the providers themselves.
provider "aws" { region = "us-west-2"
} provider "ct" {}
The AWS region is set to us-west-2 and the ct provider requires no configuration. With the providers configured, you’re ready to define some infrastructure. Use a data source block to read the configuration.
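Something along these lines, using the ct provider's ct_config data source to read and transpile config.yaml:

data "ct_config" "config" {
  content = file("config.yaml")
}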
With this data block defined, you can now access the transpiled Ignition output as data.ct_config.config.rendered. To create an EC2 server, use a resource block, and pass the Ignition output as the user_data attribute.
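A sketch of that resource; the AMI is a placeholder for the Fedora CoreOS image discussed below, and the instance type is just an inexpensive default:

resource "aws_instance" "server" {
  ami           = "ami-0123456789abcdef0"  # placeholder: Fedora CoreOS AMI for your region
  instance_type = "t3.micro"
  user_data     = data.ct_config.config.rendered
}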
This configuration hard-codes the virtual machine image (AMI) to the latest stable image of Fedora CoreOS in the us-west-2 region at time of writing. If you would like to use a different region or stream, you can discover the correct AMI on the Fedora CoreOS downloads page.
Finally, you’d like to know the public IP address of the server once it’s created. Use an output block to define the outputs to be displayed once Terraform completes its provisioning.
output "instance_ip_addr" { value = aws_instance.server.public_ip
}
Alright! You’re ready to create some infrastructure. To deploy the server simply run:
terraform init # Installs the provider dependencies
terraform apply # Displays the proposed changes and applies them
Once completed, Terraform prints the public IP address of the server, and you can SSH to the server by running ssh core@{public ip here}. Congratulations: you’ve provisioned your first Fedora CoreOS server using Terraform!
Updates and immutability
At this point you can modify the configuration in config.yaml however you like. To deploy your change simply run terraform apply again. Notice that each time you change the configuration, when you run terraform apply it destroys the server and creates a new one. This aligns well with the Fedora CoreOS philosophy: Configuration can only happen once. Want to change that configuration? Create a new server. This can feel pretty alien if you’re accustomed to provisioning your servers once and continuously re-configuring them with tools like Ansible, Puppet or Chef.
The benefit of always creating new servers is that it is significantly easier to test that newly provisioned servers will act as expected. It can be much more difficult to account for all of the possible ways in which updating a system in place may break. Tooling that adheres to this philosophy typically falls under the heading of Immutable Infrastructure. This approach to infrastructure has some of the same benefits seen in functional programming techniques, namely that mutable state is often a source of error.
Using variables
You can use Terraform input variables to parameterize your infrastructure. In the previous example, you might like to parameterize the AWS region or instance type. This would let you deploy several instances of the same configuration with differing parameters. What if you want to parameterize the Fedora CoreOS configuration? Do so using the templatefile function.
As an example, try parameterizing the username of your user. To do this, add a username variable to the main.tf file:
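A sketch of the change, declaring the variable and swapping file() for templatefile() so the value gets interpolated into the configuration (config.yaml would then reference it as ${username} in place of the hard-coded core user):

variable "username" {
  type = string
}

data "ct_config" "config" {
  content = templatefile("config.yaml", {
    username = var.username
  })
}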
To deploy with username set to jane, run terraform apply -var="username=jane". To verify, try to SSH into the server with ssh jane@{public ip address}.
Leveraging the dependency graph
Passing variables from Terraform into Fedora CoreOS configuration is quite useful. But you can go one step further and pass infrastructure data into the server configuration. This is where Terraform and Fedora CoreOS start to really shine.
Terraform creates a dependency graph to model the state of infrastructure and to plan updates. If the output of one resource (e.g. the public IP address of a server) is passed as the input of another resource (e.g. the destination in a firewall rule), Terraform understands that changes in the former require recreating or modifying the latter. If you pass infrastructure data into a Fedora CoreOS configuration, it will participate in the dependency graph. Updates to the inputs will trigger creation of a new server with the new configuration.
Consider a system of one load balancer and three web servers as an example.
The goal is to configure the load balancer with the IP address of each web server so that it can forward traffic to them.
Web server configuration
First, create a file web.yaml and add a simple Nginx configuration with a templated message.
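A rough sketch of web.yaml; the systemd unit (running Nginx via podman) and file paths are illustrative rather than definitive:

variant: fcos
version: 1.2.0
storage:
  files:
    - path: /var/www/index.html
      contents:
        inline: |
          <html>
            <h1>Hello from Server ${message}</h1>
          </html>
systemd:
  units:
    - name: nginx.service
      enabled: true
      contents: |
        [Unit]
        Description=Nginx web server in a container
        Wants=network-online.target
        After=network-online.target
        [Service]
        ExecStart=/usr/bin/podman run --rm -p 80:80 -v /var/www:/usr/share/nginx/html:z docker.io/nginx
        [Install]
        WantedBy=multi-user.target

In main.tf, three configurations and three servers can then be sketched like this (AMI again a placeholder):

data "ct_config" "web" {
  count   = 3
  content = templatefile("web.yaml", {
    message = count.index
  })
}

resource "aws_instance" "web" {
  count         = 3
  ami           = "ami-0123456789abcdef0"  # placeholder: Fedora CoreOS AMI
  instance_type = "t3.micro"
  user_data     = data.ct_config.web[count.index].rendered
}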
Notice the use of count = 3 and the count.index variable. You can use count to make many copies of a resource. Here, it creates three configurations and three web servers. The count.index variable is used to pass the first configuration to the first web server and so on.
Load balancer configuration
The load balancer will be a basic HAProxy load balancer that forwards to each server. Place the configuration in a file named lb.yaml:
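A rough sketch of lb.yaml along the same lines; the HAProxy backend is filled in with Terraform's %{ for } template directive, iterating over a servers map, and the container unit is again illustrative:

variant: fcos
version: 1.2.0
storage:
  files:
    - path: /etc/haproxy/haproxy.cfg
      contents:
        inline: |
          defaults
            mode http
            timeout connect 5s
            timeout client  30s
            timeout server  30s
          frontend http
            bind *:80
            default_backend web
          backend web
          %{ for name, ip in servers ~}
            server ${name} ${ip}:80 check
          %{ endfor ~}
systemd:
  units:
    - name: haproxy.service
      enabled: true
      contents: |
        [Unit]
        Description=HAProxy load balancer in a container
        Wants=network-online.target
        After=network-online.target
        [Service]
        ExecStart=/usr/bin/podman run --rm --net=host -v /etc/haproxy:/usr/local/etc/haproxy:ro,z docker.io/haproxy
        [Install]
        WantedBy=multi-user.target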
The template expects a map with server names as keys and IP addresses as values. You can create that using the zipmap function. Use the ID of the web servers as keys and the public IP addresses as values.
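Sketched out, the load balancer's configuration and instance mirror the web servers above:

data "ct_config" "lb" {
  content = templatefile("lb.yaml", {
    servers = zipmap(
      aws_instance.web[*].id,
      aws_instance.web[*].public_ip
    )
  })
}

resource "aws_instance" "lb" {
  ami           = "ami-0123456789abcdef0"  # placeholder: Fedora CoreOS AMI
  instance_type = "t3.micro"
  user_data     = data.ct_config.lb.rendered
}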
Finally, add an output block to display the IP address of the load balancer.
output "load_balancer_ip" { value = aws_instance.lb.public_ip
}
All right! Run terraform apply and the IP address of the load balancer displays on completion. You should be able to make requests to the load balancer and get responses from each web server.
$ export LB={{load balancer IP here}}
$ curl $LB
<html>
  <h1>Hello from Server 0</h1>
</html>
$ curl $LB
<html>
  <h1>Hello from Server 1</h1>
</html>
$ curl $LB
<html>
  <h1>Hello from Server 2</h1>
</html>
Now you can modify the configuration of the web servers or load balancer. Any changes can be realized by running terraform apply once again. Note in particular that any change to the web server IP addresses will cause Terraform to recreate the load balancer (changing the count from 3 to 4 is a simple test). Hopefully this emphasizes that the load balancer configuration is indeed a part of the Terraform dependency graph.
Clean up
You can destroy all the infrastructure using the terraform destroy command. Simply navigate to the folder where you created main.tf and run terraform destroy.
Where next?
Code for this tutorial can be found at this GitHub repository. Feel free to play with the examples and contribute more if you find something you’d love to share with the world. To learn more about all the amazing things Fedora CoreOS can do, dive into the docs or come chat with the community. To learn more about Terraform, you can rummage through the docs, check out #terraform on freenode, or contribute on GitHub.
The most recent releases of Unreal Engine include the new beta Modeling Tools Editor Mode, enabling you to create, sculpt and even texture meshes entirely in Unreal Engine. If you want to check out the new features, you need to enable the plugin. Don’t worry, there are step-by-step instructions available below.
In Unreal Engine, select Edit->Plugins.
In the Plugins dialog, filter by Model, locate and select Modeling Tools Editor Mode, and click the Enabled checkbox.
You will first be prompted to confirm that you want to continue, since this is an experimental feature. Allow this, and you will then be prompted to restart Unreal Engine; click Restart Now.
Once your project has restarted, you can access the new modeling tools in the Modes menu by selecting Modeling.
Once enabled, a new toolbar will be available with options for creating new geometry from primitives and other creation modes, tools for modifying, deforming and sculpting geometry, and much more.
Go hands-on with the Unreal Modeling Tools Editor Mode plugin in the video below.
COPR is a collection of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open-source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.
This article presents a few new and interesting projects in COPR. If you’re new to using COPR, see the COPR User Documentation for how to get started.
Blanket
Blanket is an application for playing background sounds, which may potentially improve your focus and increase your productivity. Alternatively, it may help you relax and fall asleep in a noisy environment. No matter what time it is or where you are, Blanket allows you to wake up while birds are chirping, work surrounded by friendly coffee shop chatter or distant city traffic, and then sleep like a log next to a fireplace while it is raining outside. Other popular choices for background sounds such as pink and white noise are also available.
Installation instructions
The repo currently provides Blanket for Fedora 32 and 33. To install it, use these commands:
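The commands follow the usual Copr pattern; the owner below is a placeholder for the repository owner shown on the project's Copr page:

sudo dnf copr enable <copr-owner>/blanket
sudo dnf install blanket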
k9s is a command-line tool for managing Kubernetes clusters. It allows you to list and interact with running pods, read their logs, dig through used resources, and overall makes life with Kubernetes easier. With its extensibility through plugins and customizable UI, k9s is welcoming to power users.
Installation instructions
The repo currently provides k9s for Fedora 32, 33, and Fedora Rawhide, as well as for EPEL 7 and 8, CentOS Stream, and others. To install it, use these commands:
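Again, with a placeholder owner:

sudo dnf copr enable <copr-owner>/k9s
sudo dnf install k9s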
rhbzquery is a simple tool for querying the Fedora Bugzilla instance. It provides an interface for specifying the search query, but it doesn’t list results on the command line. Instead, rhbzquery generates a Bugzilla URL and opens it in a web browser.
Installation instructions
The repo currently provides rhbzquery for Fedora 32, 33, and Fedora Rawhide. To install it, use these commands:
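With a placeholder owner, as before:

sudo dnf copr enable <copr-owner>/rhbzquery
sudo dnf install rhbzquery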
gping is a more visually intriguing alternative to the standard ping command, as it shows results in a graph. It is also possible to ping multiple hosts at the same time to easily compare their response times.
Installation instructions
The repo currently provides gping for Fedora 32, 33, and Fedora Rawhide as well as for EPEL 7 and 8. To install it, use these commands:
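And once more, with a placeholder owner:

sudo dnf copr enable <copr-owner>/gping
sudo dnf install gping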
The Flax Engine game engine has just seen its 1.0 release. We’ve had our eyes on this engine since its first public beta in 2018, which was followed by a couple of years of radio silence. Then in July of 2020 we got the 0.7 release, which added several new features including C++ live scripting support. With today’s release the Flax Engine is now available to everyone.
Key features include:
Seamless C# and C++ scripting
Automatic draw calls batching and instancing
Every asset uses async content streaming by default
Cross-platform support (Windows, Linux, Android, PS4, Xbox One, Xbox Series X/S, UWP…)
GPU Lightmaps Baking
Visual Scripting
VFX tools
Nested prefabs
Gameplay Globals for technical artists
Open World Tools (terrain, foliage, fog, levels streaming)
Hot-reloading C#/C++ in Editor
Full source-code available
Direct communication and help from engine devs
Lightweight development (full repo clone + compilation in less than 3 min)
Flax is available for Windows and Linux developers with the source code available on GitHub. Flax is a commercial game engine, but under fairly liberal terms. Commercial license terms are:
Use Flax for free, pay 4% when you release (above first $25k per quarter). Flax Engine and all related tools, all features, all supported platforms, all source code, all complete projects and Flax Samples with regular updates can be used for free.
If you want to learn more about Flax Engine, be sure to check out the following links:
You can learn more about the game engine and see it in action in the video below. Stay tuned for a more in-depth technical video on Flax Engine in the future.
This article shows the reader how easy it is to get started using pods with Podman on Fedora. But what is Podman? Well, we will start by saying that Podman is a container engine developed by Red Hat, and yes, if you thought of Docker when you read "container engine", you are on the right track. A whole new revolution in containerization started with Docker, and Kubernetes added the concept of pods to the area of container orchestration for dealing with containers that share some common resources. But hold on! Do you really think it is worth sticking with Docker alone, assuming it’s the only effective way of doing containerization? Podman can also manage pods on Fedora, as well as the containers used in those pods.
Podman is a daemonless, open source, Linux native tool designed to make it easy to find, run, build, share and deploy applications using Open Containers Initiative (OCI) Containers and Container Images.
From the official Podman documentation at http://docs.podman.io/en/latest/
Why should we switch to Podman?
Podman is a daemonless container engine for developing, managing, and running OCI Containers on your Linux System. Containers can either be run as root or in rootless mode. Podman directly interacts with an image registry, containers and image storage.
Install Podman:
sudo dnf -y install podman
Creating a Pod:
To start using a pod, we first need to create one, and for that we have a basic command structure:
$ podman pod create
The command above contains no arguments, so it will create a pod with a randomly generated name. You might, however, want to give your pod a relevant name. For that, you just need to modify the above command a bit.
$ podman pod create --name climoiselle
The pod will be created, and Podman will report back the ID of the pod. In the example shown, the pod was given the name ‘climoiselle’. Viewing the newly created pod is easy using the command shown below:
$ podman pod list
Newly created pods have been deployed
As you can see, there are two pods listed here: one named darshna, and the one created from the example, named climoiselle. No doubt you notice that both pods already include one container, yet we didn’t deploy a container to the pods yet.
What is that extra container inside the pod? This randomly generated container is an infra container. Every podman pod includes this infra container and in practice these containers do nothing but go to sleep. Their purpose is to hold the namespaces associated with the pod and to allow Podman to connect other containers to the pod. The other purpose of the infra container is to allow the pod to keep running when all associated containers have been stopped.
You can also view the individual containers within a pod with the command:
$ podman ps -a --pod
Add a container
The cool thing is, you can add more containers to your newly deployed pod. Always remember the name of your pod; it’s important, as you’ll need that name in order to deploy the container into that pod. We’ll use the official ubuntu image to deploy a container running the top command.
$ podman run -dt --pod climoiselle ubuntu top
Everything in a Single Command:
Podman is agile when it comes to deploying a container in a pod: you can create a pod and deploy a container to it with a single command. Let’s say you want to deploy an NGINX container, exposing external port 8080 to internal port 80, in a new pod named test_server.
$ podman run -dt --pod new:test_server -p 8080:80 nginx
Created a new pod and deployed a container together
Let’s check all pods that have been created and the number of containers running in each of them …
$ podman pod list
List of the pods, their state and the number of containers running in them
Do you want to know a detailed configuration of the pods which are running? Just type in the command shown below:
podman pod inspect [pod's name/id]
Make it stop!
To stop a pod, we need to use its name or ID. With the information from Podman’s pod list command, we can view the pods and their infra IDs. Simply use podman with the stop command and the particular name/infra ID of the pod.
$ podman pod stop climoiselle
Hey take a look!
My pod climoiselle stopped
After following this short tutorial, you can see how quickly you can use pods with Podman on Fedora. It’s an easy and convenient way to use containers that share resources and interact together.
Following quickly on the heels of last month’s Facebook sponsorship of Blender, today it was announced that Blender has a new corporate sponsor, Amazon AWS. This sponsorship is a bit different, in that it is aimed at improving a very specific aspect of Blender: character animation. The sponsorship will enable Blender to hire multiple developers to work over a period of three years on improving the character animation tools in Blender.
Today Blender Foundation announced that AWS has joined the Blender Development Fund as a Patron Member to support continued core development and innovation for Blender. AWS committed to a period of three years, specifically to support character animation tools development.
“We’re excited to continue to expand our support for open source solutions for our customers in the digital content creation space,” said Kyle Roche, GM of Creative Tools. “The Blender Foundation has been an industry leader in providing production-grade open source software solutions, and we look forward to helping our mutual customers work more efficiently than ever before through continued improvements in Blender.”
Two years ago, Blender Foundation started a project to redesign and upgrade Blender’s character animation system for the coming decade. Nicknamed “Animation 2020”, it has a number of specs that were reviewed by character animator and industry veteran Jason Schleifer, now Creative Director at AWS.
“It has always been my preference to work closely with industry talents on improving Blender,” said Blender chairman Ton Roosendaal. “Thanks to AWS’ support we can recruit additional top developers to help us bring character animation in Blender to new heights.”
Blender Foundation will start recruiting in the course of Q1 2021, pending the current travel and meeting restrictions being lifted or relaxed.
Amazon AWS, or Amazon Web Services, is the massive cloud services arm of Amazon, which provides a great deal of the backbone of the modern internet. Amazon also has gaming divisions, including Twitch and the recently updated Lumberyard game engine. Blender is an open source, cross-platform 3D application that supports modelling, animation, texturing, rendering, sculpting and more. Improvements to the character animation tools will be a welcome addition. Blender development is supported by the Blender Development Fund.
You can learn more about the Amazon AWS sponsorship in the video below.
Unity Technologies and Snap Inc (parent company of Snapchat) have announced a new partnership. The partnership is a two-fold endeavour: advertising and technology integration.
Starting today, Unity Ads, which reaches a highly engaged mobile gaming audience on both Android and iOS, is now included in the Snap Audience Network (SAN). Snapchat’s SAN advertiser campaigns will now include video inventory from Unity’s extensive network of mobile gaming titles, helping advertisers extend beyond Snapchat. Unity Ads reports 22.9B+ monthly global ad impressions, reaching 2B+ monthly active end-users worldwide. In 2020, mobile ad viewers have converted at higher frequencies, with install conversion rates up by 23%, and mobile gamers installing 84% more apps.
“Snapchat is all about staying connected with your closest friends, but friendships aren’t just about conversations. They are often also based on shared experiences, which today includes gaming,” said Ben Schwerin, VP of Partnership, Snap Inc. “As gaming has increasingly become a visible part of the Snapchatter journey, it’s also an area where we aim to make it easier for retailers and brands to reach Unity’s action-orientated gaming community through their Snapchat campaigns.”
From the same article, we get details on the new Unity integration of Snap technologies:
Mobile game developers can now leverage select features of Snap Kit, available today in the Unity Asset Store, to enhance gameplay and the game discovery experience:
– Snap Kit’s Login Kit allows gamers to use their Snapchat account as a quick way to sign up and log in to games.
– Snap Kit’s Creative Kit extends the experience by allowing users to share their gameplay, decorating still shots or 15-second videos with branded stickers, or attaching an AR lens that has been created with game branding to share with their Snapchat friends. The shares also include referral links back to the game, amplifying discovery and user acquisition for Unity developers to Snapchat’s 249 million daily active users.
A Bitmoji integration will also be coming in early 2021, and will add a new level of personalization to gameplay. With the Bitmoji for Games Unity SDK, developers will be able to leverage 3D Bitmoji to create a more immersive experience in games made with Unity. They will be able to bring players’ Bitmoji avatars into the center of gameplay, enabling players to be themselves in games like never before.
The Snap Kit integration is already live on the Asset Store. For game developers utilizing Unity Ads, this should be an immediate win, as a large group of advertisers will now be able to target their ad units, hopefully leading to an increase in revenue. You can learn more about the Unity & Snap partnership in the video below.
The open source, cross-platform HTML5 game framework Phaser just got a major update, Phaser 3.50. We’ve long been fans of Phaser, going back to Phaser 2 with our Complete Phaser Game tutorial as well as our Phaser 3 tutorial video. Phaser 3.50 is being called the single biggest Phaser release yet.
After 13 beta releases, over 200 resolved issues, thousands of lines of new code and the culmination of over 6 months of incredibly hard work, Phaser 3.50 is finally here.
It’s not hyperbole or exaggeration when I say that Phaser 3.50 is the single biggest point release ever in the history of Phaser. There are quite literally hundreds of new features to explore, updates to key areas and of course bug fixes. I did actually try counting all the changes, but gave up after I’d reached 900 of them! Thankfully, they are, as always, meticulously detailed in the Change Log. The changes for 3.50 actually grew so large that I had to split them out from the main Change Log and put them into their own file.
However, don’t let this overwhelm you. A massive number of the changes are purely internal and while there are absolutely some API breaking changes in this release (hence the large version number jump), we’ve kept them as sensible as possible. We already know of lots of devs who have upgraded with minimal, or no, changes to their actual game code. We cannot guarantee that for everyone, of course, but depending on how complex your game is, the chances are good.
This release comes with several new examples, and all of the existing examples have been audited to guarantee they are compatible with version 3.50. Major new features in Phaser 3.50 include an improved post-processing effects pipeline, 3D Mesh game objects, multi-texture support, support for isometric and hexagonal maps directly from Tiled, Aseprite export support with animations, point-light game objects and much, much more.
Full details of the hundreds of changes in this release can be found in the release notes. Phaser is an open source project under the MIT license and is available on GitHub. You can learn more about the Phaser 3.50 release in the video below.