
AI Powered Art Tools

Today we are going to look at two vastly different AI-powered art generation tools, Luminar and NVIDIA GauGAN Beta. Luminar is a powerful art processing tool, a cross between Photoshop and Lightroom, but AI assisted. We recently covered Luminar hands-on, including how to use it with Affinity Photo, here, and it is currently (2021-01-05) in the final 24 hours of a Humble Bundle sale.

NVIDIA GauGAN, on the other hand, is a web-based application that is part of the NVIDIA AI Playground. GauGAN is described as:

GauGAN, named after post-Impressionist painter Paul Gauguin, creates photorealistic images from segmentation maps, which are labeled sketches that depict the layout of a scene.

Artists can use paintbrush and paint bucket tools to design their own landscapes with labels like river, rock and cloud. A style transfer algorithm allows creators to apply filters — changing a daytime scene to sunset, or a photorealistic image to a painting. Users can even upload their own filters to layer onto their masterpieces, or upload custom segmentation maps and landscape images as a foundation for their artwork.

You can learn more about GauGAN here, while technical details of the algorithm are available in the open source implementation hosted on GitHub. You can try an already trained version of the algorithm in action here, right in your browser.

These are not unique examples of machine learning or AI-enhanced art creation tools. In early 2020 Unity acquired Artomatix, the creator of ArtEngine, an AI driven material creation tool. Another project that was recently featured on this site is Cascadeur, a physics based animation tool that uses machine learning to help with animations. DeepMotion Animate 3D is another recently featured machine learning based application, which takes simple 2D footage and generates a 3D rig and animation from it.

The key thing in all of these tools thus far is that they don’t seek to replace the artist, but to augment them using deep or machine learning algorithms. You can check out NVIDIA GauGAN and Luminar in action in the video below. For a VERY limited time you can get Luminar on Humble Bundle here. [Expires 01/06! — GFS can receive a commission on Humble purchases]. So what do you say, are AI powered art tools the future?

[youtube https://www.youtube.com/watch?v=SIUdUPjeatg?feature=oembed&w=1500&h=844]

Scatter for Godot

Today we are looking at Scatter for the Godot Game Engine. Scatter is a Godot add-on that makes it incredibly easy to instance mesh objects in your game level. This makes level design tasks like placing grass, paths, fences, etc., incredibly simple. Additionally, Scatter supports instancing multiple meshes (think different tree meshes to make a forest) in the same scatter, excluding splines or points from being scatter targets, and more.

Scatter is an open source project with the source code hosted on GitHub under the MIT open source license. The project is implemented as a simple Godot add-on, so simply clone the repository into your project’s addons folder (or create one if you don’t have one already). Next load your project, go to Project Settings, then Plugins, and make sure Scatter is enabled.
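
For example, installing the add-on from a terminal might look like the following minimal sketch. The project path is a placeholder, the repository URL should be taken from the project’s GitHub page, and the exact folder layout may differ, so follow the repository’s README if in doubt:

cd /path/to/your-godot-project
mkdir -p addons
git clone <scatter repository URL> addons/scatter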

Once Scatter is enabled, you create a Scatter object. This is a spline path that defines the boundaries of the scatter area. You then add a ScatterItem child to your Scatter, and add a MeshInstance to the ScatterItem. This mesh instance is the 3D model that will be “scattered” around the boundary defined by the Scatter path.

The creator of the Scatter add-on also created Concept Graph for Godot, an excellent procedural generation extension we previously covered here. You can learn more about Scatter for Godot and see it in action in the video below.

[youtube https://www.youtube.com/watch?v=MB3Vz6JFAOA?feature=oembed&w=1500&h=844]

Network address translation part 1 – packet tracing

The first post in a series about network address translation (NAT). Part 1 shows how to use the iptables/nftables packet tracing feature to find the source of NAT related connectivity problems.

Introduction

Network address translation is one way to expose containers or virtual machines to the wider internet. Incoming connection requests have their destination address rewritten to a different one. Packets are then routed to a container or virtual machine instead. The same technique can be used for load-balancing where incoming connections get distributed among a pool of machines.

Connection requests fail when network address translation is not working as expected. The wrong service is exposed, connections end up in the wrong container, requests time out, and so on. One way to debug such problems is to check that the incoming request matches the expected or configured translation.

Connection tracking

NAT involves more than just changing the IP addresses or port numbers. For instance, when mapping address X to Y, there is no need to add a rule to do the reverse translation. A netfilter system called “conntrack” recognizes packets that are replies to an existing connection. Each connection has its own NAT state attached to it. Reverse translation is done automatically.
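
If the conntrack-tools package is installed, the tracked connections (and the NAT state attached to them) can be listed directly. This is only a hedged aside here; a later part of this series covers the connection tracking subsystem in more detail:

 
# conntrack -L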

Ruleset evaluation tracing

The nftables utility (and, to a lesser extent, iptables) allows for examining how a packet is evaluated and which rules in the ruleset were matched by it. To use this special feature, “trace rules” are inserted at a suitable location. These rules select the packet(s) that should be traced. Let’s assume that a host coming from IP address C is trying to reach the service on address S and port P. We want to know which NAT transformation is picked up, which rules get checked and whether the packet gets dropped somewhere.

Because we are dealing with incoming connections, add a rule to the prerouting hook point. Prerouting means that the kernel has not yet made a decision on where the packet will be sent to. A change to the destination address often results in the packet being forwarded rather than being handled by the host itself.

Initial setup

 
# nft 'add table inet trace_debug'
# nft 'add chain inet trace_debug trace_pre { type filter hook prerouting priority -200000; }'
# nft "insert rule inet trace_debug trace_pre ip saddr $C ip daddr $S tcp dport $P tcp flags syn limit rate 1/second meta nftrace set 1"

The first command adds a new table. This allows easier removal of the trace and debug rules later. A single “nft delete table inet trace_debug”, shown below, will be enough to undo all rules and chains added to the temporary table during debugging.
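
When the debugging session is over, the whole temporary table can be removed in one step:

 
# nft delete table inet trace_debug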

The second command creates a base chain that hooks in before routing decisions have been made (prerouting), with a negative priority value to make sure it will be evaluated before connection tracking and the NAT rules.

The only important part, however, is the last fragment of the third rule: “meta nftrace set 1”. This enables tracing events for all packets that match the rule. Be as specific as possible to get a good signal-to-noise ratio. Consider adding a rate limit to keep the number of trace events at a manageable level. A limit of one packet per second or per minute is a good choice. The provided example traces all syn and syn/ack packets coming from host $C and going to destination port $P on the destination host $S. The limit clause prevents event flooding. In most cases a trace of a single packet is enough.

The procedure is similar for iptables users. An equivalent trace rule looks like this:

 
# iptables -t raw -I PREROUTING -s $C -d $S -p tcp --tcp-flags SYN SYN  --dport $P  -m limit --limit 1/s -j TRACE

Obtaining trace events

Users of the native nft tool can just run the nft trace mode:

 
# nft monitor trace

This prints out the received packet and all rules that match the packet (use CTRL-C to stop it):

 
trace id f0f627 ip raw prerouting  packet: iif "veth0" ether saddr ..

We will examine this in more detail in the next section. If you use iptables, first check the installed version via the “iptables --version” command. Example:

 
# iptables --version
iptables v1.8.5 (legacy)

(legacy) means that trace events are logged to the kernel ring buffer. You will need to check dmesg or journalctl. The debug output lacks some information but is conceptually similar to the one provided by the new tools. You will need to check the rule line numbers that are logged and correlate those to the active iptables ruleset yourself. If the output shows (nf_tables), you can use the xtables-monitor tool:

 
# xtables-monitor --trace

If the command only shows the version, you will also need to look at dmesg/journalctl instead. xtables-monitor uses the same kernel interface as the nft monitor trace tool. Their only difference is that it will print events in iptables syntax and that, if you use a mix of both iptables-nft and nft, it will be unable to print rules that use maps/sets and other nftables-only features.
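
For the cases where the events end up in the kernel log (the legacy variant, or when xtables-monitor is not usable), a hedged way to pull them out is to filter the kernel messages, assuming the default TRACE: log prefix:

 
# dmesg | grep TRACE:
# journalctl -k | grep TRACE: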

Example

Let’s assume you’d like to debug a non-working port forward to a virtual machine or container. The command “ssh -p 1222 10.1.2.3” should provide remote access to a container running on the machine with that address, but the connection attempt times out.

You have access to the host running the container image. Log in and add a trace rule. See the earlier example on how to add a temporary debug table. The trace rule looks like this:

 
# nft "insert rule inet trace_debug trace_pre ip daddr 10.1.2.3 tcp dport 1222 tcp flags syn limit rate 6/minute meta nftrace set 1"

After the rule has been added, start nft in trace mode: nft monitor trace, then retry the failed ssh command. This will generate a lot of output if the ruleset is large. Do not worry about the large example output below – the next section will do a line-by-line walkthrough.

 
trace id 9c01f8 inet trace_debug trace_pre packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp tcp dport 1222 tcp flags == syn
trace id 9c01f8 inet trace_debug trace_pre rule ip daddr 10.1.2.3 tcp dport 1222 tcp flags syn limit rate 6/minute meta nftrace set 1 (verdict continue)
trace id 9c01f8 inet trace_debug trace_pre verdict continue
trace id 9c01f8 inet trace_debug trace_pre policy accept
trace id 9c01f8 inet nat prerouting packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp  tcp dport 1222 tcp flags == syn
trace id 9c01f8 inet nat prerouting rule ip daddr 10.1.2.3  tcp dport 1222 dnat ip to 192.168.70.10:22 (verdict accept)
trace id 9c01f8 inet filter forward packet: iif "enp0" oif "veth21" ether saddr .. ip daddr 192.168.70.10 .. tcp dport 22 tcp flags == syn tcp window 29200
trace id 9c01f8 inet filter forward rule ct status dnat jump allowed_dnats (verdict jump allowed_dnats)
trace id 9c01f8 inet filter allowed_dnats rule drop (verdict drop)
trace id 20a4ef inet trace_debug trace_pre packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp tcp dport 1222 tcp flags == syn

Line-by-line trace walkthrough

The first line generated is the packet id that triggered the subsequent trace output. Even though this is in the same grammar as the nft rule syntax, it contains the header fields of the packet that was just received. You will find the name of the receiving network interface (here named “enp0”), the source and destination MAC addresses of the packet, the source IP address (which can be important – maybe the reporter is connecting from a wrong/unexpected host) and the TCP source and destination ports. You will also see a “trace id” at the very beginning. This identification tells which incoming packet matched a rule. The second line contains the first rule matched by the packet:

 
trace id 9c01f8 inet trace_debug trace_pre rule ip daddr 10.1.2.3 tcp dport 1222 tcp flags syn limit rate 6/minute meta nftrace set 1 (verdict continue)

This is the just-added trace rule. The first rule is always one that activates packet tracing. If there were other rules before this one, we would not see them. If there is no trace output at all, the trace rule itself is never reached or does not match. The next two lines tell that there are no further rules and that the “trace_pre” hook allows the packet to continue (verdict accept).

The next matching rule is

 
trace id 9c01f8 inet nat prerouting rule ip daddr 10.1.2.3  tcp dport 1222 dnat ip to 192.168.70.10:22 (verdict accept)

This rule sets up a mapping to a different address and port. Provided 192.168.70.10 really is the address of the desired VM, there is no problem so far. If it’s not the correct VM address, the address was either mistyped or the wrong NAT rule was matched.

IP forwarding

Next we can see that the IP routing engine told the IP stack that the packet needs to be forwarded to another host:

trace id 9c01f8 inet filter forward packet: iif "enp0" oif "veth21" ether saddr .. ip daddr 192.168.70.10 .. tcp dport 22 tcp flags == syn tcp window 29200

This is another dump of the packet that was received, but there are a couple of interesting changes. There is now an output interface set. This did not exist previously because the previous rules are located before the routing decision (the prerouting hook). The id is the same as before, so this is still the same packet, but the address and port have already been altered. In case there are rules that match “tcp dport 1222”, they will no longer have any effect on this packet.

If the line contains no output interface (oif), the routing decision steered the packet to the local host. Route debugging is a different topic and not covered here.

trace id 9c01f8 inet filter forward rule ct status dnat jump allowed_dnats (verdict jump allowed_dnats)

This shows that the packet matched a rule that jumps to a chain named “allowed_dnats”. The next line reveals the source of the connection failure:

 
trace id 9c01f8 inet filter allowed_dnats rule drop (verdict drop)

The rule unconditionally drops the packet, so no further log output for the packet exists. The next output line is the result of a different packet:

trace id 20a4ef inet trace_debug trace_pre packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp tcp dport 1222 tcp flags == syn

The trace id is different, but the packet has the same content. This is a retransmit attempt: the first packet was dropped, so TCP retries. Ignore the remaining output, it does not contain new information. Time to inspect that chain.

Ruleset investigation

The previous section found that the packet is dropped in a chain named “allowed_dnats” in the inet filter table. Time to look at it:

 
# nft list chain inet filter allowed_dnats
table inet filter {
 chain allowed_dnats {
  meta nfproto ipv4 ip daddr . tcp dport @allow_in accept
  drop
 }
}

The rule that accepts packets in the @allow_in set did not show up in the trace log. Double-check that the address is in the @allow_in set by listing the element:

 
# nft "get element inet filter allow_in { 192.168.70.10 . 22 }"
Error: Could not process rule: No such file or directory

As expected, the address-service pair is not in the set. We add it now.

 
# nft "add element inet filter allow_in { 192.168.70.10 . 22 }"

Run the query command again; it will now return the newly added element.

# nft "get element inet filter allow_in { 192.168.70.10 . 22 }"
table inet filter { set allow_in { type ipv4_addr . inet_service elements = { 192.168.70.10 . 22 } }
}

The ssh command should now work and the trace output reflects the change:

trace id 497abf58 inet filter forward rule ct status dnat jump allowed_dnats (verdict jump allowed_dnats)
trace id 497abf58 inet filter allowed_dnats rule meta nfproto ipv4 ip daddr . tcp dport @allow_in accept (verdict accept)
trace id 497abf58 ip postrouting packet: iif "enp0" oif "veth21" ether ..
trace id 497abf58 ip postrouting policy accept

This shows the packet passes the last hook in the forwarding path – postrouting.

In case the connection is still not working, the problem is somewhere later in the packet pipeline and outside of the nftables ruleset.

Summary

This article gave an introduction on how to check for packet drops and other sources of connectivity problems with the nftables trace mechanism. A later post in the series shows how to inspect the connection tracking subsystem and the NAT information that may be attached to tracked flows.


Blender in 2021

The Blender Foundation have just announced their “Big Projects” list for Blender in 2021. It is hard to argue that 2020 wasn’t a banner year for Blender development, with three major releases as well as the first ever LTS release. Through 2020 we saw improvements to the Blender UI/UX, sculpting tools, modeling, EEVEE, Cycles and so much more. We also saw a record number of massive companies coming on board the Blender Development Fund. With today’s release of the projects list, we get insight into Blender’s priorities for 2021, including:

  • launch of a new open movie called Sprite Fight
  • the everything nodes project, where everything in Blender will be able to be driven procedurally using nodes (see Geometry Nodes in action here)
  • an all-new Asset Browser editor window for better content management
  • massive improvements to the VSE or Video Sequence Editor
  • EEVEE real-time rendering improvements including Vulkan support, motion blur, depth of field and possibly raytracing
  • VR improvements including the ability to use VR controllers and author content in virtual reality
  • Cycles rendering improvements, especially related to performance
  • Animation 22 (previously Animation 2020), an effort to improve animation tools in Blender, sponsored by AWS
  • improved pipeline and USD (Pixar’s open interchange format) support

You can learn more about Blender’s accomplishments in 2020, as well as the new projects for 2021, in the video below.

[youtube https://www.youtube.com/watch?v=mU387jw8UNU?feature=oembed&w=1500&h=844]

A 2020 love letter to the Fedora community

[This message comes directly from the desk of Matthew Miller, the Fedora Project Leader. — Ed.]

When I wrote about COVID-19 and the Fedora community all the way back on March 16, it was very unclear how 2020 was going to turn out. I hoped that we’d have everything under control and return to normal soon—we didn’t take our Flock to Fedora in-person conference off the table for another month. Back then, I naively hoped that this would be a short event and that life would return to normal soon. But of course, things got worse, and we had to reimagine Flock as a virtual event on short notice. We weren’t even sure if we’d be able to make our regular Fedora Linux releases on schedule.

Even without the pandemic, 2020 was already destined to be an interesting year. Because Red Hat moved the datacenter where most of Fedora’s servers live, our infrastructure team had to move our servers across the continent. Fedora 33 had the largest planned change set of any Fedora Linux release—and not small things either. We changed the default filesystem for desktop variants to BTRFS and promoted Fedora IoT to an Edition. We also began Fedora ELN—a new process which does a nightly build of Fedora’s development branch in the same configuration Red Hat would use to compose Red Hat Enterprise Linux. And Fedora’s popularity keeps growing, which means more users to support and more new community members to onboard. It’s great to be successful, but we also need to keep up with ourselves!

So, it was already busy. And then the pandemic came along. In many ways, we’re fortunate: we’re already a global community used to distributed work, and we already use chat-based meetings and video calls to collaborate. But it made the datacenter move more difficult. The closure of Red Hat offices meant that some of the QA hardware was inaccessible. We couldn’t gather together in person like we’re used to doing. And of course, we all worried about the safety of our friends and family. Isolation and disruption just plain make everything harder.

I’m always proud of the Fedora community, but this year, even more so. In a time of great stress and uncertainty, we came together and did our best work. Flock to Fedora became Nest With Fedora. Thanks to the heroic effort of Marie Nordin and many others, it was a resounding success. We had way more attendees than we’ve ever had at an in-person Flock, which made our community more accessible to contributors who can’t always join us. And we followed up with our first-ever virtual release party and an online Fedora Women’s Day, both also resounding successes.

And then, we shipped both Fedora 32 and Fedora 33 on time, extending our streak to six releases—three straight years of hitting our targets.

The work we all did has not gone unnoticed. You already know that Lenovo is shipping Fedora Workstation on select laptop models. I’m happy to share that two of the top Linux podcasts have recognized our work—particularly Fedora 33—in their year-end awards. LINUX Unplugged listeners voted Fedora Linux their favorite Linux desktop distribution. Three out of the four Destination Linux hosts chose Fedora as the best distro of the year, specifically citing the exciting work we’ve done on Fedora 33 and the strength of our community. In addition, OMG! Ubuntu! included Fedora 33 in its “5 best Linux distribution releases of 2020” and TechRepublic called Fedora 33 “absolutely fantastic“.

Like everyone, I’m looking ahead to 2021. The next few months are still going to be hard, but the amazing work on mRNA and other new vaccine technology means we have clear reasons to be optimistic. Through this trying year, the Fedora community is stronger than ever, and we have some great things to carry forward into better times: a Nest-like virtual event to complement Flock, online release parties, our weekly Fedora Social Hour, and of course the CPE team’s great trivia events.

In 2021, we’ll keep doing the great work to push the state of the art forward. We’ll be bold in bringing new features into Fedora Linux. We’ll try new things even when we’re worried that they might not work, and we’ll learn from failures and try again. And we’ll keep working to make our community and our platform inclusive, welcoming, and accessible to all.

To everyone who has contributed to Fedora in any way, thank you. Packagers, blog writers, doc writers, testers, designers, artists, developers, meeting chairs, sysadmins, Ask Fedora answerers, D&I team, and more—you kicked ass this year and it shows. Stay safe and healthy, and we’ll meet again in person soon. Oh, one more thing! Join us for a Fedora Social Hour New Year’s Eve Special. We’ll meet at 23:30 UTC today in Hopin (the platform we used for Nest and other events). Hope to see you there!


Blender Network Being Shut Down

Blender have announced that Blender Network is being shut down. The Blender Network shutdown is occurring in just a few months, according to the post on the Blender press site:

On March 31st 2021, Blender Network will terminate its operations. All ongoing memberships will be cancelled and have their last payment refunded. The Blender Foundation Certified Trainer program (BFCT), which was already on hold, will also stop. The blendernetwork.org domain (and all URLs) will redirect to blender.org. No data will be preserved on the blendernetwork.org server, which will be discontinued.

Originally presented by Ton Roosendaal as a whitepaper in 2010, Blender Network’s mission was to facilitate the provisioning of services and support, connect users and promote professional Blender businesses. This mission has been carried out by the blendernetwork.org platform, providing visibility and business opportunities to several hundred individuals and organizations.

However, the incredible growth of the Blender community and the rise of social media have greatly reduced the need for a Blender-backed platform to provide legitimacy and visibility to professionals. For this reason, after almost a decade of operation, it is time to retire the platform.

Fortunately, out of the ashes rises a new phoenix, with Pablo Vazquez making the following announcement on Twitter.

This was followed by a follow-up Tweet with more details:

Unfortunately there is no successor planned for the Blender Certification Program, which was already put on hold due to a lack of focus. You can learn more about the Blender Network sunsetting in the video below.

[youtube https://www.youtube.com/watch?v=P8Du83qumwc?feature=oembed&w=1500&h=844]

PowerIK For Unreal Engine Hands-On

Today we look at PowerIK for Unreal Engine, a full body IK solver. PowerIK was recently released in the December monthly UE giveaways, as part of the free forever category. On the Unreal Engine Marketplace, PowerIK is described as:

Power IK is a full-body IK solver that lets animators push and pull any skeleton with any number of effectors.

Use Power IK to easily align creatures to uneven terrain, or dynamically modify their pose at run-time. Power IK is a robust and efficient solver that produces remarkably natural poses even under extreme circumstances.

PowerIK has the following features:

  • Unique proprietary full-body IK solver
  • Power IK Solver AnimGraph node
  • Built-in ground alignment
  • Power IK Rig Actor Component for making interactive rigs
  • Bonus! Procedural animation example blueprints
  • Bonus! 6 sample skeletal meshes with fully documented blueprints

While it supports Unreal Engine 4.26, the current install will give you an error when you try to run PowerIK. If this occurs, on Windows the fix is fairly simple. Navigate to your install directory for UE 4.26, then navigate to:

\Engine\Plugins\Marketplace\PowerIK\Source\PowerIKRuntime\sdk\lib\Win64

Copy the file POWERIK.DLL, then paste it into the directory:

\Engine\Plugins\Marketplace\PowerIK\Binaries\Win64
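
From a Windows command prompt, the copy might look like this rough sketch; the engine path shown assumes a default install location and may differ on your machine:

copy "C:\Program Files\Epic Games\UE_4.26\Engine\Plugins\Marketplace\PowerIK\Source\PowerIKRuntime\sdk\lib\Win64\POWERIK.DLL" "C:\Program Files\Epic Games\UE_4.26\Engine\Plugins\Marketplace\PowerIK\Binaries\Win64\"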

Now it should work just fine. In the video below we go hands-on with PowerIK using the example project currently available for download here. If you run into some trouble, the documentation is available here.

[youtube https://www.youtube.com/watch?v=VXL2k2SbBms?feature=oembed&w=1500&h=844]

Choose between Btrfs and LVM-ext4

Fedora 33 introduced a new default filesystem in desktop variants, Btrfs. After years of Fedora using ext4 on top of Logical Volume Manager (LVM) volumes, this is a big shift. Changing the default file system requires compelling reasons. While Btrfs is an exciting next-generation file system, ext4 on LVM is well established and stable. This guide aims to explore the high-level features of each and make it easier to choose between Btrfs and LVM-ext4.

In summary

The simplest advice is to stick with the defaults. A fresh Fedora 33 install defaults to Btrfs and upgrading a previous Fedora release continues to use whatever was initially installed, typically LVM-ext4. For an existing Fedora user, the cleanest way to get Btrfs is with a fresh install. However, a fresh install is much more disruptive than a simple upgrade. Unless there is a specific need, this disruption could be unnecessary. The Fedora development team carefully considered both defaults, so be confident with either choice.

What about all the other file systems?

There are a large number of file systems for Linux systems. The number explodes after adding in combinations of volume managers, encryption methods, and storage mechanisms. So why focus on Btrfs and LVM-ext4? For the Fedora audience, these two setups are likely to be the most common. Ext4 on top of LVM became the default disk layout in Fedora 11, and ext3 on top of LVM came before that.

Now that Btrfs is the default for Fedora 33, the vast majority of existing users will be looking at whether they should stay where they are or make the jump forward. Faced with a fresh Fedora 33 install, experienced Linux users may wonder whether to use this new file system or fall back to what they are familiar with. So out of the wide field of possible storage options, many Fedora users will wonder how to choose between Btrfs and LVM-ext4.

Commonalities

Despite core differences between the two setups, Btrfs and LVM-ext4 actually have a lot in common. Both are mature and well-tested storage technologies. LVM has been in continuous use since the early days of Fedora Core and ext4 became the default in 2009 with Fedora 11. Btrfs merged into the mainline Linux kernel in 2009 and Facebook uses it widely. SUSE Linux Enterprise 12 made it the default in 2014. So there is plenty of production run time there as well.

Both systems do a great job preventing file system corruption due to unexpected power outages, even though the way they accomplish it is different. Supported configurations include single drive setups as well as spanning multiple devices, and both are capable of creating nearly instant snapshots. A variety of tools exist to help manage either system, both with the command line and graphical interfaces. Either solution works equally well on home desktops and on high-end servers.

Advantages of LVM-ext4

[Figure: Structure of ext4 on LVM – the relationship of the LVM-ext4 file system to hard-drive partitions and mounted directories]

The ext4 file system focuses on high performance and scalability, without a lot of extra frills. It is effective at preventing fragmentation over extended periods of time and provides nice tools for when it does happen. Ext4 is rock solid because it builds on the previous ext3 file system, bringing with it all the years of in-system testing and bug fixes.

Most of the advanced capabilities in the LVM-ext4 setup come from LVM itself. LVM sits “below” the file system, which means it supports any file system. Logical volumes (LV) are generic block devices so virtual machines can use them directly. This flexibility allows each logical volume to use the right file system, with the right options, for a variety of situations. This layered approach also honors the Unix philosophy of small tools working together.

The volume group (VG) abstraction from the hardware allows LVM to create flexible logical volumes. Each LV pulls from the same storage pool but has its own configuration. Resizing volumes is a lot easier than resizing physical partitions, as there is no limitation from the ordered placement of the data. LVM physical volumes (PV) can be any number of partitions and can even move between devices while the system is running.

LVM supports read-only and read-write snapshots, which make it easy to create consistent backups from active systems. Each snapshot has a defined size, and a change to the source or snapshot volume uses space from there. Alternately, logical volumes can also be part of a thinly provisioned pool. This allows snapshots to automatically use data from a pool instead of consuming fixed-size chunks defined at volume creation.
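
As a rough sketch (the volume group and logical volume names here are hypothetical), growing a volume along with its file system and then taking a snapshot might look like this:

 
# lvresize --resizefs --size +10G fedora/home
# lvcreate --snapshot --size 5G --name home-snap /dev/fedora/home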

Multiple devices with LVM

LVM really shines when there are multiple devices. It has native support for most RAID levels and each logical volume can have a different RAID level. LVM will automatically choose appropriate physical devices for the RAID configuration or the user can specify it directly. Basic RAID support includes data striping for performance (RAID0) and mirroring for redundancy (RAID1). Logical volumes can also use advanced setups like RAID5, RAID6, and RAID10. LVM RAID support is mature because under the hood LVM uses the same device-mapper (dm) and multiple-device (md) kernel support used by mdadm.
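
For example, creating a mirrored (raid1) logical volume might look like this hedged sketch, with a hypothetical volume group name:

 
# lvcreate --type raid1 --mirrors 1 --size 20G --name mirrored_data vg_storage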

Logical volumes can also be cached volumes for systems with both fast and slow drives. A classic example is a combination of SSD and spinning-disk drives. Cached volumes use faster drives for more frequently accessed data (or as a write cache), and the slower drive for bulk data.

The large number of stable features in LVM and the reliable performance of ext4 are a testament to how long they have been in use. Of course, with more features comes complexity. It can be challenging to find the right options for the right feature when configuring LVM. For single drive desktop systems, features of LVM like RAID and cache volumes don’t apply. However, logical volumes are more flexible than physical partitions and snapshots are useful. For normal desktop use, the complexity of LVM can also be a barrier to recovering from issues a typical user might encounter.

Advantages of Btrfs

[Figure: Btrfs structure – the relationship of the Btrfs file system to hard-drive partitions and mounted directories]

Lessons learned from previous generations guided the features built into Btrfs. Unlike ext4, it can directly span multiple devices, so it brings along features typically found only in volume managers. It also has features that are unique in the Linux file system space (ZFS has a similar feature set, but don’t expect it in the Linux kernel).

Key Btrfs features

Perhaps the most important feature is the checksumming of all data. Checksumming, along with copy-on-write, provides the key method of ensuring file system integrity after unexpected power loss. More uniquely, checksumming can detect errors in the data itself. Silent data corruption, sometimes referred to as bitrot, is more common than most people realize. Without active validation, corruption can end up propagating to all available backups. This leaves the user with no valid copies. By transparently checksumming all data, Btrfs is able to immediately detect any such corruption. Enabling the right dup or raid option allows the file system to transparently fix the corruption as well.

Copy-on-write (COW) is also a fundamental feature of Btrfs, as it is critical in providing file system integrity and instant subvolume snapshots. Snapshots automatically share underlying data when created from common subvolumes. Additionally, after-the-fact deduplication uses the same technology to eliminate identical data blocks. Individual files can use COW features by calling cp with the reflink option. Reflink copies are especially useful for copying large files, such as virtual machine images, that tend to have mostly identical data over time.
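
For example, a reflink copy of a virtual machine image (file names hypothetical) shares all of its data blocks with the original until either file is modified:

 
$ cp --reflink=always vm-base.qcow2 vm-clone.qcow2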

Btrfs supports spanning multiple devices with no volume manager required. Multiple device support unlocks data mirroring for redundancy and striping for performance. There is also experimental support for more advanced RAID levels, such as RAID5 and RAID6. Unlike standard RAID setups, the Btrfs raid1 option actually allows an odd number of devices. For example, it can use 3 devices, even if they are different sizes.
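
A hedged example of creating a Btrfs raid1 file system across three devices (the device names are hypothetical):

 
# mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc /dev/sdd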

All RAID and dup options are specified at the file system level. As a consequence, individual subvolumes cannot use different options. Note that using the RAID1 option with multiple devices means that all data in the volume is available even if one device fails, and the checksum feature maintains the integrity of the data itself. That is beyond what current typical RAID setups can provide.

Additional features

Btrfs also enables quick and easy remote backups. Subvolume snapshots can be sent to a remote system for storage. By leveraging the inherent COW metadata in the file system, these transfers are efficient because they only send incremental changes relative to previously sent snapshots. User applications such as snapper make it easy to manage these snapshots.
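
A minimal sketch of such a backup, with hypothetical paths and host name: take a read-only snapshot, send it to a remote host, and later send only the changes relative to the previous snapshot:

 
# btrfs subvolume snapshot -r /home /home/.snapshots/home-day1
# btrfs send /home/.snapshots/home-day1 | ssh backup-host btrfs receive /srv/backups
# btrfs subvolume snapshot -r /home /home/.snapshots/home-day2
# btrfs send -p /home/.snapshots/home-day1 /home/.snapshots/home-day2 | ssh backup-host btrfs receive /srv/backups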

Additionally, a Btrfs volume can have transparent compression, and chattr +c will mark individual files or directories for compression. Not only does compression reduce the space consumed by data, but it helps extend the life of SSDs by reducing the volume of write operations. Compression certainly introduces additional CPU overhead, but a lot of options are available to dial in the right trade-offs.
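
A hedged example (device, mount point, and directory are hypothetical): mount a volume with transparent zstd compression, or mark a single directory so that new files written to it get compressed:

 
# mount -o compress=zstd:3 /dev/sdb1 /data
# chattr +c /data/logs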

The integration of file system and volume manager functions by Btrfs means that overall maintenance is simpler than LVM-ext4. Certainly this integration comes with less flexibility, but for most desktop, and even server, setups it is more than sufficient.

Btrfs on LVM

Btrfs can convert an ext3/ext4 file system in place. In-place conversion means there is no data to copy out and then back in. The data blocks themselves are not even modified. As a result, one option for an existing LVM-ext4 system is to leave LVM in place and simply convert ext4 over to Btrfs. While doable and supported, there are reasons why this isn’t the best option.
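
The conversion itself is done with the btrfs-convert tool from btrfs-progs while the file system is unmounted; a minimal sketch, with a hypothetical device name:

 
# umount /home
# btrfs-convert /dev/fedora/home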

Some of the appeal of Btrfs is the easier management that comes with a file system integrated with a volume manager. By running on top of LVM, there is still some other volume manager in play for any system maintenance. Also, LVM setups typically have multiple fixed sized logical volumes with independent file systems. While Btrfs supports multiple volumes in a given computer, many of the nice features expect a single volume with multiple subvolumes. The user is still stuck manually managing fixed sized LVM volumes if each one has an independent Btrfs volume. Though, the ability to shrink mounted Btrfs filesystems does make working with fixed sized volumes less painful. With online shrink there is no need to boot a live image.

The physical locations of logical volumes must be carefully considered when using the multiple device support of Btrfs. To Btrfs, each LV is a separate physical device and if that is not actually the case, then certain data availability features might make the wrong decision. For example, using raid1 for data typically provides protection if a single drive fails. If the actual logical volumes are on the same physical device, then there is no redundancy.

If there is a strong need for some particular LVM feature, such as raw block devices or cached logical volumes, then running Btrfs on top of LVM makes sense. In this configuration, Btrfs still provides most of its advantages, such as checksumming and easy sending of incremental snapshots. While using LVM adds some operational overhead, it is no greater with Btrfs than with any other file system.

Wrap up

When trying to choose between Btrfs and LVM-ext4 there is no single right answer. Each user has unique requirements, and the same user may have different systems with different needs. Take a look at the feature set of each configuration, and decide if there is something compelling about one over the other. If not, there is nothing wrong with sticking with the defaults. There are excellent reasons to choose either setup.


Easy Anime Character Creation with VRoid Studio and Blender

Creating anime characters for game development has never been easier, thanks to tools like VRoid Studio and Blender. In this tutorial we showcase using VRoid Studio, a free tool for creating textured and animated anime avatars. If VRoid Studio sounds familiar, that is because we previously featured this tool back in 2019.

In the video below we walk through the following processes:

  • Using VRoid Studio
  • Exporting VRM files
  • Importing VRM into Blender
  • Creating a simple animation
  • Exporting from Blender in GLB/GLTF format
  • Importing GLB formats into the Godot game engine
  • Exporting VRoid characters to Mixamo for animating

In addition to VRoid Studio, you need the VRM importer for Blender. If you are using the Unity game engine, there is a Unity importer for VRM files available as well, although we won’t be covering it in the video below.

One area of importance with any tool, especially free tools, is what the license terms are. You can see the list of appropriate uses here, which specifically includes “Selling video games and other products featuring characters created with VRoid Studio”. Once you have all the appropriate tools, check out the video below for step-by-step instructions on how to create an animated anime character for use in Godot using VRoid Studio and Blender.

[youtube https://www.youtube.com/watch?v=dAW6ovhENs8?feature=oembed&w=1500&h=844]

Contribute at the Fedora Test Week for Kernel 5.10

The kernel team is working on final integration for kernel 5.10. This version was just recently released, and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week from Monday, January 04, 2021 through Monday, January 11, 2021. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

How does a test week work?

A test week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results. We have a document which provides all the steps in written form.

Happy testing, and we hope to see you on test day.