
Report: Fruit Ninja creator Halfbrick cuts workforce by 50 percent

Fruit Ninja creator Halfbrick has laid off 30 employees, which amounts to around half of its entire workforce. 

As reported by CNET, the Australian mobile studio has been struggling of late, and is attempting to regain momentum by restructuring around its most successful titles, Fruit Ninja and Jetpack Joyride.

These layoffs are apparently the result of that change in tack, and come around two-and-a-half years after the company made its last two game designers redundant.

According to anonymous sources, the company’s main Brisbane office now houses fewer than 30 staffers, all of whom are focused entirely on maintaining Fruit Ninja and Jetpack Joyride.

The developer also has an office in Spain, but it’s unclear how (if at all) that studio has been affected by the cuts. Gamasutra has reached out to Halfbrick for comment.


Blog: The impact of random number generation on game design

The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.


RNG, or Random Number Generation, has been the blessing, curse, and insanity-inducing element of many video games. Adding chaos to your game is never easy, and understanding just how powerful it can be is an important lesson for game design.

What is RNG?

RNG is where probability and chance interact with game design. If you’ve ever played a game where there is a chance of something happening, then RNG is in effect.

A major part of RPG and roguelike design, the idea that things cannot go exactly as planned is a key source of replayability and challenge. When it works, RNG can keep a game nigh-infinitely replayable, thanks to new challenges and situations for players to get into.

What makes RNG tricky is that its impact varies wildly depending on the design at play. If you put too little in, then it’s just wasted time on something that doesn’t affect the player. However, if there’s too much RNG, then it can feel like the player is not the one in control of winning.

The Scales of Chaos:

RNG is about creating controlled chaos in your game, and every title handles it differently. The one constant that we see in good game design is providing the player with hard-coded elements that are consistent. It’s important for the player to know just what exactly their options are and have an idea of what the possible outcomes are.

In XCOM, the player does not know if they’re going to hit with their shots, but they do know the percentage chance and the possible damage. Without a foundation like that for the player to understand, a game can feel like it’s just too chaotic.
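A hit roll of this kind can be sketched in a few lines. Everything here (the function name, the 75% chance, the 3–5 damage range) is illustrative, not XCOM’s actual system; the point is that the player sees the odds and the stakes before the dice are rolled:

```python
import random

def resolve_shot(hit_chance, damage_range, rng=random):
    """Roll a single attack. hit_chance is in [0, 1], damage_range is (lo, hi)."""
    if rng.random() < hit_chance:
        # On a hit, damage is rolled uniformly within the advertised range.
        return rng.randint(*damage_range)
    return 0  # A miss deals nothing.

# The player is shown "75% to hit, 3-5 damage" before committing to the shot.
damage = resolve_shot(0.75, (3, 5))
```

Because the chance and range are surfaced to the player up front, even a bad roll reads as bad luck within known rules, not as arbitrary chaos.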

I’ve played titles where the RNG is so polarized that winning or losing can be out of the player’s hands, such as with the game Tharsis.

On the other hand, there have been games that tried to introduce RNG as a way of improving replayability, only to have it blow up in their face. If the pool of possible outcomes is too small, then there’s no point in making it random, such as having a weapon spawn with either six points of damage or seven.

RNG’s impact can go to either extreme, but it still requires a careful eye towards balance. In Resident Evil 7, while item spawn locations are fixed to some extent, what the game spawns is random, with a slight weight towards what the player is missing. That serves to guarantee that as long as the player is moving forward, they should be able to restock the essentials.
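The "weighted towards what the player is missing" idea can be sketched as a scarcity-biased random pick. The item names and the weighting formula here are assumptions for illustration, not Resident Evil 7’s actual loot logic:

```python
import random

def pick_spawn(inventory, items=("ammo", "herb", "fuel"), rng=random):
    """Pick a random item, weighted toward whatever the player is low on."""
    # An item the player has less of gets a proportionally higher weight,
    # so the pick is biased toward scarcity but never guaranteed.
    weights = [1.0 / (1 + inventory.get(item, 0)) for item in items]
    return rng.choices(items, weights=weights, k=1)[0]

# A player out of ammo is more likely (but not certain) to find some.
spawn = pick_spawn({"ammo": 0, "herb": 5, "fuel": 2})
```

The soft bias is the design point: the player can still get unlucky, but a forward-moving player trends back toward a stocked inventory.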

The more RNG affects success in a game, the harder it can be to deliver a defined experience for the player. For every player who gets lucky and has an easy time, there are players who are going to feel the full weight of bad luck when they play your game.

No matter how much RNG is a part of your design, it’s important to understand what it means when it comes to the core gameplay loop.

Random Gameplay:

RNG is like a spice: its impact can vary greatly depending on what it’s seasoning.

The less RNG in your game, the more set outcomes and situations the player must deal with. Without any RNG, a game will not challenge players past their first play, as nothing will change.

But with that said, even a little RNG in the right place can have huge ramifications for play. Both FTL and Into the Breach make use of set game systems and rules for how the games are played. RNG in FTL comes into play with what events and items the player gets in a run, and Into the Breach has random enemies spawning during a battle.


As the player, these situations greatly impact not only how the game plays out, but what their response is to the changing factors. Being able to improvise on the fly is an essential part of playing games with RNG, but there is a limit. You need to be careful when RNG directly impacts the player’s options, as it is possible to create situations where the game can become hard for the sake of difficulty.

With Into the Breach, the player’s options each turn are fixed, but the enemies they encounter are randomly spawned from a fixed pool. One unlucky spawn can create a cascading effect, trapping the player in a negative feedback loop they can’t escape.
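Spawning from a fixed pool, with a seed so a run can be reproduced, is a common way to implement this pattern. The enemy names and the seed-mixing scheme below are illustrative assumptions, not Into the Breach’s actual code:

```python
import random

# A hypothetical fixed pool: randomness picks *which* of these appear,
# but nothing outside the pool can ever spawn.
ENEMY_POOL = ["firefly", "leaper", "scorpion", "burrower"]

def spawn_wave(turn, pool=ENEMY_POOL, size=2, seed=None):
    """Spawn a wave of enemies for a turn from a fixed pool.

    Mixing the seed with the turn number makes every turn's wave
    reproducible for a given run seed, which helps debugging and balance.
    """
    rng = random.Random(None if seed is None else seed * 100003 + turn)
    return [rng.choice(pool) for _ in range(size)]

wave = spawn_wave(turn=1, seed=42)  # same seed + turn -> same wave
```

Bounding the randomness to a known pool is what keeps an unlucky spawn feeling like a hard puzzle rather than an arbitrary one.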

When the player loses control of the situation without having the option to correct it, that’s when RNG can be seen as going too far.

A Steady Hand:

RNG can have long-reaching impacts in your design when put into the right game systems. If the design can accommodate it, allowing the player to mitigate or manipulate it to their favor can be an essential part of mastering the game.

Too much or too little RNG can ruin the vision for your game. No matter how random or out of control things get, there must be a set foundation for the player to learn from, and a plan to keep the game from going out of control with the generation.

That was one of the understated elements that made Spelunky work so well. No matter the difficulty of the procedurally generated levels, it all fell back on the player’s skill and mastery to carry them through. Not only that, but the levels had the feeling of being handmade even though they were generated via an algorithm.

But if the RNG is too limited, or acts as window dressing, then the game will still be the same no matter how much things are shaken up with each generation.

To wrap up: Can you think of games where the RNG was either too extreme or too limited, and made the game worse in the process?


Q&A: An indie dev’s guide to hiring, casting, and directing voice actors

Voice acting is crucially different from other art assets in game dev, either visual or audio, in that there’s no such thing as “off the shelf” voices.

Every project has its own dialogue, and if a developer wants to add voice to their game, it’s an individualized, bespoke project. If you’re a solo or small-team game creator, voice acting may seem out of your reach—and your budget—but enthusiast, semi-pro, and freelance voice acting has flourished in much the same way indie game development has.

Now, getting great voice acting in your game is often simply a matter of learning how to connect with freelance artists who are within your means.

Ashe Thurman (of Austin-based one-woman development studio Pixels and Pins) has experience on both sides of the mic, as a freelance voice actor and game developer of original English-language visual novels (OELVNs).

Here, she takes time out from development of her upcoming game, The D (Stands for Demon), and talks with Gamasutra about what she’s learned from her own experiences commissioning enthusiast and freelance voice talent — and how you might apply those learnings to your own small-team or solo-developed games.

Questions and answers have been edited for brevity and clarity.

When and why should a solo or small-team game creator go with voice acting over simple text?

Thurman: Any game can benefit from voice acting. It helps us connect to characters and can make settings feel more dynamic. Though, obviously, this applies to certain genres more than others. Narrative-heavy games (like visual novels and RPGs) definitely see a bigger boon in including voice acting over, say, a platformer or FPS. But it’s not a sort of linchpin element. We’ve all played games without a lick of voice work, and enjoyed them just the same. It’s also important to note that bad voice acting can ruin a game.

“The implementation of voice acting doesn’t have to be an all-or-nothing approach.”

The implementation of voice acting doesn’t have to be an all-or-nothing approach. The voices should enhance—not get in the way of—the gameplay, and there’s not really a one-size-fits-all answer to that. Play games in the genre you’re working in that have voicework and see what people do, see what sounds right to you.

For puzzle or strategy-based games, having voiced cutscenes and unvoiced gameplay sounds fine. It’s an easy way to break those portions of a game apart. In a more action-heavy game, having voiced cutscenes but not including voiced efforts during gameplay can sound strange and inorganic — you’re better off just having in-game voice efforts and carrying them into the cutscene. With visual novels (my developmental metier), it’s not so easy to break up “gameplay” because it’s just the same thing throughout the whole game. If you want to shoot for partial voice in those scenarios, you have to break it down thematically; a voiced scene in an otherwise unvoiced game is going to carry more narrative weight than other scenes.

As long as you maintain internal consistency and it doesn’t feel “weird,” you don’t have to abide by some hard and fast rule. In my game The D (Stands for Demon), for example, I chose to use emotional barks, but only from the main love interests, and have certain pivotal narrative scenes between the female leads fully voiced. However, I was willing to pull the full voice acting scenes if they didn’t work. It played well in the demo, so they’re staying in. My first narrative game Amaranth was fully voiced. Despite it being a short game, it was an incredible amount of work. That’s part of the reason I scaled back for The D.

Pixels and Pins’ The D (Stands for Demon)

When to go the extra mile for voice work comes down to if the team can physically do it, and do it well. Including voice acting comes with casting actors, directing, line editing, voice implementation, and about a thousand other little nitpicky things you don’t even think about until you’re already knee deep in it. It is a gigantic amount of extra work. A lot of pro developers outsource it. There are entire studios and companies specifically centered around casting and recording for video games. That’s how big of a task it can get.

You’re only really ready for the undertaking if you have or can bring in someone (or multiple someones) that will be able to provide that combination of hard and soft skills, and you have the time in your development cycle to not half-ass it.

Something else super important to consider beyond just capability is budget. You and your team may be working “for free,” so to speak, but you’ve gotta pay your voice actors if you’re planning on putting out a paid product. If the money’s not there, the money’s not there.

How do you budget for voice acting?

Budget early, cast late. In theory, voice actors are some of the last people you should be bringing in, but by the nature of voice acting, you’re generally going to be paying them upfront in lump sums. You have to make sure you have room for that in your budget from the very start. And if your budget’s going to inform how much voice acting you can actually hire, you’ll have to make sure that’s accounted for in your game design.

“Budget early, cast late.”

Pro studios might offer between $200 and $300 per hour with a one-to-two-hour minimum session, and it’s not unreasonable to expect to be paid between $500 and $700 as a main character in a fully voiced, full-length visual novel.

At the indie scale, per-line or even per-word is the most common breakdown. [It’s difficult] to put a hard number on it, but to give an idea, I’ve done commercial work for $5 per 100 words. But that is the absolute lowest I’m willing to go, and that was only as part of a bigger project where the final payment was much higher.
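Per-word budgeting is simple enough to sanity-check with a few lines of arithmetic. The character names and word counts below are made up; the $5-per-100-words rate is the floor quoted above, used purely as a worked example:

```python
def voice_budget(word_counts, rate_per_100_words=5.0):
    """Estimate total voice-acting cost from each character's script word count."""
    total_words = sum(word_counts.values())
    return total_words / 100 * rate_per_100_words

# Two hypothetical leads at 4,000 words each plus a 500-word minor role:
cost = voice_budget({"lead_a": 4000, "lead_b": 4000, "minor": 500})
# 8,500 words at $5 per 100 words -> $425
```

Running the numbers per character early also shows where trimming a script (or scaling back to partial voicing) buys the most budget room.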

Talk to fellow indie devs. DM a couple of friendly-looking VAs on Twitter, explain what you’re researching, and politely ask what their personal per-word rates are. Things like Fiverr and People Per Hour, despite their downfalls, can at least give you an idea of what actors are willing to come to the table for.

How do you communicate to potential players that voice acting is a big selling point?

Even in the OELVN market, you’re going to have a hard time framing voice acting in and of itself as anything beyond just a neat feature. There are a lot of visual novel players, for example, who just turn off voices. Demos and trailers are gonna be spiffier sounding with voice acting, and the game will feel more professional, which will get more people interested, but that’s more of a subconscious thing.

The way you want to use voice acting from a marketing standpoint is as more of a hype machine. Your voice actors are going to promote the heck out of games they’re in if you give them something to be excited about. “Oh look at my character in this! She’s so cute!” Casting announcements generate buzz. “Ohh! Who’s gonna play the villain!? What are they gonna sound like!?” Then you’re going to have the fans of specific voice actors who look into a game just because that voice actor is in it.

Then you’re sort of looking at more of a long term marketing scheme. Having a voice creates a deeper connection to a character. That connection turns to devotion, accelerated, possibly, by that voice’s pre-existing fandom. Devotion turns to obsession, and suddenly it’s the only thing Tumblr’s talking about. So it’s not so much a matter of “hey this game is fully voiced” as it is “hey this game stars John Johnson as a green pigeon. Come listen to him make cooing sounds for ten hours.”

How do you go about finding voice actors as a solo game creator?

There are a handful of professional casting options, generally at a price you’re not going to be able to afford. But, yeah, if you’re new to casting and don’t have a pool to choose from, you’re going to be doing a lot of just hoping and waiting. Finding voice actors on your own is a bit like fishing. Your project’s the bait, and you throw it out there to see who takes a nibble.

Social media and word of mouth are your friends in this regard. If a voice actor sees a neat project, they’ll drop it in every Discord server and group chat they have. They’ll pass it on to their friends. Hopefully you’ve already got some kind of presence on Twitter, Tumblr, or Facebook, so posting a link to that casting call as a PDF or Google Doc is a good place to start.

You also want to go to the places where the voice actors are. The big three, so to speak, are voiceactingclub.com, Casting Call Club, and the casting call section of Behind the Voice Actors. They’ve all got their pros and cons, but these are all places a lot of really talented voice actors go to find projects that are hiring. There are a couple other sites like Newgrounds and Lemmasoft that have casting call areas, but I haven’t found them particularly useful in recent years.

How do you write a solicitation for pitches?

[Unless] you have the budget to pay for someone else to cast (which, let’s be real, you probably don’t), you’re gonna need to be ready to sell the potential of your project. [Likewise,] I’ve seen a developer just post “VAs send me your demo reels, I’m casting for a thing,” but I don’t find that a particularly effective method. You have to have a big enough reach to get enough useful submissions, and it doesn’t really tell you if an actor can nail the specific character you’re looking for.

“You may feel an instinct to provide an exact example of the voice you’re looking for. Don’t.”

Start with a casting call. In about two paragraphs you need to be able to describe your studio, describe the game (including possible explicit content), succinctly explain the amount and kind of voice work involved (e.g., partial, full, efforts, singing), state how much you’re paying, and mention your intended distribution platform. Also provide links to other completed projects.

After that, be very specific but simple about the technical aspects of audition submissions. One file per character, three takes max, mono, delivered as an MP3, labeled “Actor Name-Character Name”, and e-mailed to xyz with Subject “whatever” is pretty standard.
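When the submissions start rolling in, a consistent filename convention pays off because it can be checked mechanically. This is a hypothetical helper, assuming the “Actor Name-Character Name.mp3” convention described above with a single hyphen as the separator:

```python
import re

# Expected audition filename: "Actor Name-Character Name.mp3"
# (illustrative convention; adjust the pattern to match your own casting call)
PATTERN = re.compile(r"^(?P<actor>[^-]+)-(?P<character>[^-]+)\.mp3$")

def parse_audition(filename):
    """Return (actor, character) if the file follows the convention, else None."""
    match = PATTERN.match(filename)
    if not match:
        return None
    return match.group("actor").strip(), match.group("character").strip()

parse_audition("Jane Doe-Villain.mp3")  # -> ("Jane Doe", "Villain")
```

Even a check this simple lets you sort submissions by character automatically instead of renaming files by hand.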

You want to make it easy on your actors to audition. They should be able to see everything they need in one place and not have to jump through hoops to submit for consideration. If you’re drafting this in a document, all the above information should be a page, at most. This tells me right away, as an actor, who you are and how I can expect you to present yourself as a creator. That’s your cover letter.

After that, in this same package, are the audition sides. A side is everything you need to audition for a character. It’ll have a description of the character and lines that you would like your actor to perform to audition. A good character description will have maybe one or two sentences about the character, the vocal pitch (high, low, medium), the vocal quality (nasal, bassy, feminine, tomboy, etc.), and vocal age. Is this a teenager or an adult? Having art available is not required, but it is going to help considerably.

Then you need to pick your side lines very carefully, because you have 3-5 lines to encapsulate everything this character’s going to have to be able to do. If this character’s going to speak French, you better have a French audition line. If they’re going to need to cry, include some crying.

You may feel an instinct to provide an exact example of the voice you’re looking for. Don’t. It doesn’t make things easier for anybody and it tells an actor that you have a specific voice in mind, and nothing but an impression of that voice is going to work for you. That will stilt everybody involved creatively.

What’s the difference between a novice voice actor and a seasoned professional?

You’re going to get what you pay for. The newer your actors, the more likely you’re going to run into lower technical ability, lack of professionalism, and just an overall lower quality deliverable and experience. It’s the same as any profession.

“You’re going to get what you pay for.”

Because it’s not exceptionally difficult to acquire half-decent equipment to build your own home studio, the online voice acting community has become this kind of wonderful hodgepodge of pro, semi-pro, and amateur talent all mingled in together, and doing more than just games. I’ve done games, yes, but most of my work so far is actually in pre-lay animation [voice acting done before the voiced scene is visually complete]. I’ve also done audio dramas, ADR [post-production re-recording], and weird stuff that defies traditional labeling. And that applies to a lot of actors I know. Very few people I know are just a “video game” actor.

Generally, the audition itself will at least reveal new actors. They’ll make very dumb technical mistakes like plosives [causing popping sounds from breath hitting the mic, usually from the letter P], peaking [by exceeding the maximum volume of their microphone], or their equipment just doesn’t sound great.

[After that, the difference is] less immediately apparent. Number one, because it’s really easy to hide a lack of practical experience behind good technique and equipment, and number two, because many actors are just equally good. Talent does not reliably translate into booking gigs, so you can have people who are very good who are just not landing roles. You also have the opposite problem.

General professionalism is going to end up being one of the biggest indicators of their experience level. Their communications are going to be short, sweet, and to the point, respecting your time as a producer or director. And most active actors have some kind of social media presence, so a hop into Google is going to pull that information up for you.

How do you choose the right person for the right role?

You can start by just listening to the auditions top to bottom, and you’ll find the perfect voice. Then you’ll find another perfect voice. Then you’ll find a third perfect voice. It’s easy to get overwhelmed, so breaking it down into parts helps.

I personally start by picking out any submissions that don’t meet the technical requirements I’m looking for. If their mic is bad or if there’re a lot of plosives or peaking or room noise or what have you. And those get put aside. They’re not out completely, but they get pushed toward the bottom of the stack for consideration.

Next I look at overall vocal quality per character. Oh, this voice is too young, too old, too high, trying too hard to sound low, whatever, and those get put in the maybes. This is also when I might look at a voice fitting a different character better. This happened on my last game. An actor I sort of knew had auditioned for Greg, but when I listened, I thought, “feels like it could be a good Kaliocanthus voice,” so I e-mailed him and asked him to audition for that part if he was up for it. He ended up being perfect, and that’s who I cast as that character.

Then it’s all about the acting, and experience is really the best asset to tackle that task. If I have a stoic character, and I get a read that’s not stoic at all, they’re not grasping that character, so it’s gonna get kicked to the “maybes.” I’ll listen to individual line reads, and if they don’t seem to be providing the emotions that I’m aiming for with the lines I gave, they’ll get kicked to the “maybes” because I need to be able to direct someone. If they have trouble picking up the emotional cues I’m trying to give them, that might end up being a point of contention that gets in the way of the final performance.

Then once I have my top choices for each character, I play them all next to each other, and try to look at it holistically for my final decision. I had a situation with a project a couple years ago where I had two female leads that were very close in age and talked to each other a lot. I ended up using my second choice for the older girl because my first choice just sounded way too similar to the actor I wanted to use for the younger girl. So having that last overlook of the entire cast helps considerably to make everything feel like it fits together.

And at this point, I’ve tried very hard to not actually look at who the actors are. I want to come in as unbiased as possible. It’s not perfect because I’m going to recognize some people no matter what, but I do the best I can. I use experience as my tie breaker. If I have three voices, and they’re all perfect, that’s when I’m going to go look and see who the actors are, and that’ll influence my final decision. And honestly, someone I worked with before might also get a little extra consideration if they gave me a good audition.

How do you direct voice actors as a solo game creator?

The biggest sticking point you’ll face in voice compared to screen or stage is a super compressed timeline. You’re gonna have a handful of sessions at most to slam all this work out. Usually only one or two. There’s not time to sort of sit with a character and learn them or languish over their characterizations. I try to give out scripts ahead of time for an actor to flip through to get a feel for their character, at least, but I’m also ready to just get that worked out in-session.

So as a voice director your primary job becomes paring all this information down into something easily usable for your actor. There are all these acting methods—Stanislavski, Hagen, Spolin—that can introduce you to some very interesting vocabulary, but you can also just think of the whole experience as basically a conversation where you’re working out a scene together.

“The biggest sticking point you’ll face in voice compared to screen or stage is a super compressed timeline…there’s not time to sort of sit with a character and learn them or languish over their characterizations.”

You want to provide your actors with enough background to know what kind of world their character is living in, but that doesn’t necessitate these huge lore bombs. They don’t need to know about the Succession War of the 10 Dragons and the Knights of the Templar Order of St. What’shisface. “Magical Medieval Europe in a country torn apart by a hundred years of war” tells an actor pretty much everything they need to know initially about a setting. That gives a time, location, and genre reference right there. And reference existing properties if it’s appropriate. It’s not creatively bankrupt to take little shortcuts like that.

For each scene as you’re developing a character through the narrative, you can just provide additional information as you go. Start with the basics of the character (race, gender, age, profession, general attitude, and their sort of overall through line) then as events occur, you offer how that event affected that character. How they feel about certain characters as they come up. Then just update that information in real time as you go.

You can also withhold information to get the response you want for a scene. Because of the nature of it, a lot of voice actors don’t really bother trying to get the whole story until the full release. Use that to your advantage. One time I let an actress go a whole episode of an audio drama without telling her her sister was a vampire and two more episodes after that before giving her the script where her fiance betrays her. It creates nice, natural drama.

I also like to play “weird/not weird” with the immediate environment the character is in. This person being a wizard? Not weird. A person wearing glasses? Very weird. A man with a robot prosthetic? A little weird because they’re usually too expensive for retail workers.

After providing all that information, I narratively lead them into a scene with something like “It’s a sunny afternoon in your manor, but a courier’s just delivered the message that your brother Eric has died,” then sort of just let them go and show me what they’ve got. If I’m able, I personally like to do it as a sort of table read where I’ll sort of half-act out the other side with the delivery I’ve either already gotten or am going to be looking for. I find that it creates a more natural flow, and lays the skeleton of the scene out really well.

We won’t do the whole scene that way in one go. We’ll section it out. Then we’ll go back and tweak. I want this line a little more angry. Let me get an emphasis on the word “everyone” here. Okay, do it again, but let me hear with an emphasis on “we” this time. Oh this line gave us some trouble. Let’s run it a few times. More. Push it harder. Harder. Louder. Add some crying. Okay pull it back just a little. Start the crying on “John” instead of “baby.” That was good, but I lost the words in the crying there toward the end. Let’s sharpen that up a little bit. And you just adjust adjust adjust and that’s all intuition and practice and being able to hear what you want in your head and trust your actors. You definitely have to be ready to hear a read you weren’t expecting and end up liking it better than what you initially planned.

One of the bigger things I’ve seen creators do is really hard to explain, but it basically boils down to not knowing what you want, or at least not knowing how to explain what you’re looking for. It’s one thing to be directing an actor and decide, hey, can we go a little higher or a little lighter or a little quicker paced. It’s an entirely different thing to hire an actor for a standard midwestern accent, then suddenly say, oh no, we want Russian instead, can you do that? If you know the character like you’re supposed to as a creator, finding a voice for them shouldn’t be a guessing game.

Situations arise where you can’t live direct. Time zones. Schedules. Project sizes. A lot of things get in the way. You have to place a lot of trust in your actors and your writing at that point. You want to provide written action cues so they understand what’s happening. You want to do what you can to mimic the kind of direction you’d give live as direction notes, and know that you’re just not going to be able to play with deliveries or maybe get it exactly the way you’re hearing it in your head. You can ask for revisions, but there’s gonna be a limit to how long that back and forth can go on before it’s just frustrating for everyone.

And all of this comes with a sort of tacit understanding that there’s going to be multiple takes, hopefully not too many, but that’s so hard to know for sure ahead of time. It’s pretty common practice when you’re doing non live-directed work, to give 3-5 different reads, let the director pick one, then see if you need to do a retake. So there’s a sort of general idea of how long it’s going to take to nail something. You both don’t want to be there forever, and if you picked your actors right, you’re already going to be in good shape in that regard.

What are some problems you can have with freelancers, and how do you deal with them?

As long as you respect their work and their art, freelancers are generally super easy to work with. It’s rare you’re going to have major problems. If you do and you have them consistently, the freelancers might not be the problem.

“When money’s involved, especially, get everything down on paper. That covers both your butts.”

You do want to avoid misunderstandings. When money’s involved, especially, get everything down on paper. That covers both your butts and prevents a possible he said/she said situation from arising, which can happen. [Similarly,] be easy to get ahold of and take on the burden of communication. An actor shouldn’t have to come ask you when they’re going to get paid, for example. If there’s a delay, tell them.

Sometimes you’re going to run into an actor you just personally don’t totally get along with, and it can create just this little tiny bit of friction. If you’re all professional, though, you can push right past it, no problems. If you do get into a situation where you never want to work with that actor again, you don’t have to.

You might also get into weird schedule conflicts that you don’t know if you can blame on poor planning or not, because emergencies do happen to people. I had an actress that I wanted to cast as a lead, and when I offered her the part, she said sure, but she was going to be having major dental work in a month and be out of commission for six weeks afterward. And this was an episodic project that still needed to finish off the last two episodes, so there was just no way to know if it was possible for her to record all that material in time. So I kind of apologized, and explained the situation, and asked if she would be up for playing a smaller part instead that I knew she could finish, and I never heard from her again after that. That kind of thing happens, and you just sort of have to have a backup. For that particular project, I was able to cast my second choice (who I was gonna offer that small part to), and played the small part myself.

Similarly, what are some problematic habits to avoid as a game creator because they create problems for voice actors?

To start, a big, unfortunately common issue is bringing in actors too early in development. Scripts aren’t ready. It’s still in pre-development. You don’t even have all the main characters solidified. The longer you have to make an actor wait around not doing anything, the less enthusiastic they get and the more nebulous your ability to actually finish the project looks.

Be upfront. Don’t surprise an actor with the amount of work you expect, or the presence of illicit content, or by suddenly recasting them. These are all things that have happened to me or to fellow actors.

And as sort of an addendum piece of advice regarding mature content in particular, be very explicit about the nature of the content right off the bat and what each character is going to have to perform. Actors have limits, and they need to know if the project they’re auditioning for might have them do things they’re uncomfortable with. Additionally, you can’t have minors doing explicit sex scenes. Period. A teenage voice actor is generally going to know that, but it’s your job as an adult/creator to double check.

When you hire me as a voice actor, remember that’s all you’ve hired me to do. I’m not your audio engineer. I’m not your editor. I’m not your writer. I should be able to just send you a raw file, and a creator should be okay with that because they’re going to do everything else on their end. I don’t mind cleaning a file of dead space a little bit, or breaking it up into smaller parts. Those are very mechanically simple. But I don’t want to have to mix things for you because you don’t know how to level audio.

Be the kind of creator that leads an actor to want to work with you and promote the work they’re doing with you. I’ve seen content creators that put out good work, but don’t necessarily conduct themselves in a way that I’m particularly fond of, which makes me reticent to seek out working with them.

Posted on Leave a comment

Now Available on Steam – Pure Farming 2018

Pure Farming 2018 is Now Available on Steam!

FARMING GOES GLOBAL! Use the latest technology and state-of-the-art licensed machines to manage all aspects of modern farming. Travel between Europe, Asia, and both Americas to plant region-specific crops such as hemp, coffee beans, and olives. Farm your way thanks to three unique game modes tailored to both simulation veterans and newcomers to farming games.

Posted on Leave a comment

Sponsored: Defining the next generation of user interfaces

Noesis Technologies discusses the mandatory characteristics that any modern user interface middleware must offer, and explains how its own solution, NoesisGUI, addresses them.

Noesis Technologies is a privately held company founded by a passionate team of developers with a solid understanding of real-time and games technology. Our vision is to provide efficient tools that help other companies deliver high-quality experiences. With that in mind, we created NoesisGUI, our user interface middleware, thoroughly designed to make the very best games.

Find more information about NoesisGUI. Contact Noesis here.

The user interface is one of the most important player experiences in a video game, but it is still something that is constantly overlooked or left till the last minute, and always underestimated in terms of the time it takes to build one. So, at the beginning of a project, not enough team resources are assigned to the task, as it seems trivial compared to the 3D workload needed to develop the game. And some misconstrue GUI design as simply blitting a few HUD rectangles on top of the game.

The UI team is normally made up of developers not exclusively dedicated to interface tasks, and it is in the middle of the project when the team discovers a catastrophic mistake has been made. Sometimes, putting off UI development can take down the entire project.

At Noesis Technologies we suffered from this situation ourselves many times, and although there seemed to be many kinds of middleware out there specifically designed to solve this problem, we observed that none of them fully satisfied us. That’s the reason we decided to create our own, specific solution for video games, NoesisGUI. I want to share with you in this article the key elements that were important for us and the technical challenges we faced during development.

Video game development has so many peculiarities that it is quite difficult to find a UI middleware out there that really helps with this task. Video games are a highly-optimized piece of code where all aspects need to be kept under control.

A state of the art middleware must provide precise control on memory allocations, and efficiently interact with the file system whenever a new asset is required. It is very important for a piece of middleware to keep the degree of intrusiveness as low as possible.

The middleware must never create threads under the hood, it should always provide a renderer-agnostic API, and it should never communicate directly with the GPU. These are key elements to achieve the best integration possible between the game engine and the UI technology.

The rendering algorithm is key to always maintaining a solid framerate. It is not so important to make frames render faster as it is to have them render consistently and without jank. Even if there are huge amounts of pixels to draw in a single frame, the experience must be smooth, always. Many types of middleware out there cache everything into bitmaps with the assumption that there won’t be many changes per frame. This optimization helps the UI render faster in specific cases, when not much change is happening. This is the best-case scenario.

But as soon as the UI becomes more dynamic, with lots of animation per frame, you start to fall into worst-case scenarios. These scenarios are called performance cliffs. Your video game seems to be moving along fine until it hits one of these worst-case scenarios. A modern, GPU-aware UI middleware must be prepared to paint every pixel on every frame.

A well-designed, non-intrusive middleware must contribute minimally to the final binary size of the video game. Large libraries are always a symptom of badly designed and bloated architectures. A middleware library must be as lightweight as possible. Static libraries are always the preferred option to eliminate dead code, and compilation techniques like Whole Program Optimization are a plus. This is one of the reasons middleware that provides source code is always a better alternative than closed solutions.

A lot of available middleware out there forces you to have a virtual machine inside your game just for the UI. That is overkill, because you probably already have a scripting solution working in your game. It is always preferable to have a clean C or even C++ API, while avoiding the fancy, obscure new features that will bloat your code. More layers, like C# bindings or Lua scripting, must be built on top.

Another popular idea that must be avoided is using middleware that contains internet browsers, like WebKit or Gecko, inside. These technologies were not designed to be real-time friendly, and their layout and rendering engines are highly inefficient for video games. Using this category of libraries is the best way to code-bloat your video game.

<StackPanel Background="Orchid">
    <Button Content="Start" Command="{Binding StartCommand}" />
    <Button Content="Options" Command="{Binding OptionsCommand}" />
    <Button Content="Quit" Command="{Binding QuitCommand}" />
</StackPanel>

Declarative formats, like HTML, are more compact than the equivalent procedural code. They also establish a clear separation between the markup that defines the UI and the code that makes the application do something.

For example, a designer on your team could design a UI and then hand off the declarative format to the developer to add the procedural code. Even if the designer and the developer are the same person, you can keep your visuals in declarative files and your procedural UI code in source code files. This approach is a drastic departure from middleware using virtual machines, where both the presentation and the logic are included in the same asset. In those cases, the code in charge of the UI logic is outside the control of a programmer. This is a perfect recipe for ending up with chaotic code that is hard to maintain. A similar situation happens when you allow artists to create shaders using visual tools. Another advantage of using text-based declarative formats is that they are easy to merge and track in source code repositories, as compared to opaque binary formats.

Data binding is the mechanism that provides a simple and powerful way to auto-update data between the model and the user interface. A binding object “glues” two properties together and keeps a channel of communication between them. You can set a Binding once, and then have it do all the synchronization work for the remainder of the video game’s lifetime. This way, the game does not interact directly with the UI.

For this to work properly, some kind of reflection information is needed from the native language. Reflection is a language’s ability to examine and inspect its own data structures at runtime. Low-level languages like C++ do not expose reflection directly, and it is necessary to create a new mechanism on top of the language to emulate this functionality. In higher-level languages, like C#, reflection is part of the core language.

For example, in C++ you could create a mix of macros and templates to emulate reflection like this:

struct Person
{
    char name[256];
    float life;
    uint32_t weapons;

    _REFLECTION(Person)
    {
        Prop("Name", &Person::name);
        Prop("Life", &Person::life);
        Prop("Weapons", &Person::weapons);
    }
};

With the reflection information available from the host language, binding to it from the view should be as trivial as doing something like this:

<StackPanel Background="White">
    <TextBlock Text="{Binding Name}" />
</StackPanel>

With the rapid proliferation of multicore platforms and the growth in the number of threads that they may support, a multithreading-aware UI middleware is critical to avoid stealing those precious milliseconds from the game’s main thread. As said before, middleware libraries must never create threads under the hood. They must provide entry points to be invoked from the correct thread at the right time. This way, the threading management task is delegated to the game.


The logic phase must be separated from the render phase. The common approach is to have an update mechanism on the UI thread, where things like layout and animation are calculated, and a render mechanism that directly interacts with the GPU from a different thread. Both stages can be executed in parallel. Additionally, the render phase must also be compatible with GPU parallelization when using modern architectures that can execute commands concurrently, like D3D12 for example.

Vector graphics are to user interfaces as ray tracing is to 3D rendering. Any modern UI middleware must support vector graphics natively, and the whole render architecture must be built around them. You should render all controls with vector graphics so that they can be scaled larger or smaller and still look perfect: a unique interface for all supported resolutions in your game, all of them pixel perfect.

Vector graphics allow you to create amazing effects, such as gradient ramp animations, which are hard to achieve when using bitmaps and which greatly contribute to those subtle details that make your interface feel alive. When properly implemented, vector graphics always require less memory than the equivalent bitmap. That is important not only from a storage point of view, but also to reduce bandwidth usage when rendering.

The implementation must be 100 percent GPU friendly to take advantage of current graphic architectures. It is very important to pack all the information efficiently and do all the calculations in shaders.

For example, instead of uploading full ramps to the GPU, it is better if they are mathematically calculated and interpolated on the fly. One of the hardest challenges is transforming path definitions with curves, like Béziers and arcs, into triangles, the GPU’s native primitive. This process is called tessellation, and it is one of the critical aspects of the vector graphics renderer. Tessellation is performed in two steps: flattening, where curves are converted into straight segments, and triangulation, where the contours coming from the flattening step are converted into triangles.

There are many ways to implement the triangulation, from 100 percent GPU implementations to CPU-assisted ones. Many of the 100 percent GPU implementations out there are incomplete from the point of view of supporting complex standards like SVG. SVG allows many tricky features like stroking, dashing, and cornering.

If compatibility with not-so-modern architectures–like mobile devices, for example–is needed, this step must be performed with the help of the CPU. In this case, the critical point is streaming the information from the CPU to the GPU as efficiently as possible. There are many GL extensions available that can help with this task. This is easier to achieve in unified memory architectures, where the same memory space is accessible from any CPU or GPU in the system.

In the end, an acceptable architecture must support an all-dynamic scenario, where all the primitives could be moving at the same time during the same frame. Scenarios like this, which represent the worst case mentioned above, totally disallow using any kind of geometry caching mechanism.

High-quality font rendering is a mandatory feature for any UI middleware. Moving all calculations to the GPU to avoid any kind of intermediate textures is the big trend right now. Although this will probably become the standard in the near future, right now, with many low-resolution screen devices still out there, the only valid generic approach is baking hinted glyphs into texture atlases. We recommend not hinting along the horizontal axis, only adjusting the glyphs vertically. This way, the original shape of each glyph is better preserved. If you use vertical hinting, then you also need vertical snapping. For the horizontal axis we prefer subpixel positioning.

While the text will have no animation most of the time, it must also move and rotate smoothly. To compensate for the lack of mipmaps (that would be expensive), we recommend oversampling along the horizontal axis. LCD subpixel rendering is becoming less popular these days because this technique has important limitations, and not all LCDs have the same linear ordering of RGB subpixels. Alpha composition is also hard to implement with LCD subpixels.

For bigger font sizes, distance fields are becoming more and more of an option, but they are still far from perfect, as they produce rendering artifacts or they are too complex to implement in shaders. Conventional mesh-based rendering is a good alternative, and it’s easy to implement with minimal runtime cost and no preprocessing at all–an important factor when you are filling texture atlases dynamically.

Almost all available UI middleware prioritizes quality over speed, using algorithms that are really hard to translate to the GPU at interactive rates. These kinds of renderers need to cache results into bitmaps, making dynamic interfaces a worst-case scenario because they need to flush the cache each time and refill it with new content. For video games, a renderer must be able to fill all the pixels per frame, avoiding worst-case scenarios like bitmap caching. To achieve that, the antialiasing algorithm must be delegated to the GPU.

Multisample antialiasing (MSAA) is cheap on current GPUs. It is even cheaper on tiling architectures, with no extra memory cost. For high DPI displays, very common in mobile devices, antialiasing can even be disabled while still getting good quality. With this in mind, the naive technique of simply sending the polygons to the GPU is the best approach. In case antialiasing is really needed and the performance cost is not acceptable, we have experimented with extruding the contours of the primitives to get an approximation of the coverage for each pixel. The performance is close to not using MSAA, and the results are quite good, although shapes are slightly altered.

An efficient integration between the middleware renderer and the video game engine is not an easy task. Graphics APIs are state machines. In an inefficient integration, the GPU state is saved before rendering and restored afterward, and saving and restoring the GPU state can be really expensive. Also, many times the UI must be integrated into the game world as a mesh that blends with the 3D scene, for example when the user interface is being used in virtual reality environments. In all these cases the integration is not trivial; the middleware must be graphics-API agnostic, and it must offer appropriate callbacks to be implemented by the game. For the same reasons, we do not expect the middleware to allocate memory on its own, open files, or create threads. It should never interact directly with the GPU.

The middleware must send the primitives to be drawn using these callbacks. The set of primitives to be rendered must be properly sorted by shader to minimize the number of batches sent to the GPU. The algorithm is very similar to those used in 3D engines where objects are sorted by material and textures. Avoiding redundant state changes is also relevant for the UI. This is an important detail that is often overlooked.

The middleware must also offer callbacks compatible with modern graphics API like D3D12, Metal, or Vulkan where GPU commands can be dispatched in parallel from different threads or jobs, and provide an easy way to be integrated into those jobs. That way, the 3D scene and the UI can dispatch commands in parallel.

Visually engaging user interfaces featuring rich and fluid animations are a mandatory feature for any middleware. Achieving a high framerate, using graphics hardware, and operating in a separate UI thread are all desired characteristics of the animation system. Objects are animated by applying modifications to their individual properties. By just animating a background color or applying an animated transformation, you can create dramatic screen transitions or provide helpful visual cues. Subtle animations in each UI element are the key to a modern-looking interface. The middleware must offer enough performance to animate everything that is being shown on screen without slowing down the framerate.

A user interface middleware is more than just a renderer and a scripting language. It must offer a rich framework of classes that designers can choose from. Elements like Button, CheckBox, Label, ListBox, ComboBox, Menu, TreeView, ToolBar, ProgressBar, Slider, TextBox, or PasswordBox must be part of a rich palette of controls to be used or extended from. The middleware must offer well defined mechanisms to extend the provided elements by adding new functionality or just by aggregating existing ones.

How to measure and arrange collections of elements inside their parent panel is defined with the term “Layout”. It is an intensive and recursive process that defines where elements are positioned before rendering. Each panel exposes different layout behaviors. For example, you may need horizontal or vertical stacking, or corner anchoring or resolution-independent scaling. The layout system and the palette of panels offered by the middleware must be powerful enough to handle the different resolutions needed by the game.

Besides extending, a robust UI middleware must offer theming mechanisms to allow developers and designers to create visually-compelling effects and to create a consistent appearance for their game. A strong styling and templating model is necessary to allow maintenance and sharing of the appearance within and among games. A clean separation between the logic and the presentation is mandatory. This means that designers can work on the appearance of the game at the same time developers work on the programming logic.

After years of hard work and many UI libraries implemented, we decided to develop our own solution, NoesisGUI, which addresses all the points discussed in this article. We also took time to analyze and experiment with the majority of products available in the market. We were unable to find any solution satisfying these key points. NoesisGUI is the result of more than five years of development; it is compatible with desktop, mobile, and console platforms, and it was built to be lightweight and fast from the ground up. We also made it compatible with XAML and all its design tools, like Microsoft Blend.

Please feel free to comment on any area described in this article. We will be glad to hear your opinions.

Find more information about NoesisGUI. Contact Noesis here.

Posted on Leave a comment

Devs weigh in on the best ways to use (but not abuse) procedural generation

Like so much in game dev, the decision to use procedural generation in your game is a double-edged sword.

While it provides a theoretically unlimited amount of content, you sacrifice the advantages of your game being bespoke and hand-crafted.

The goal, then, is to craft tools that can turn out procedural content in near infinite quantities and configurations while making that content feel as close to homemade as possible.

To explore some different approaches to that problem, Gamasutra reached out to a number of devs with experience working with procedural tools and asked about how they thought about different methods of content generation, and how they avoid making procedural content that feels banal and recycled. 

Why procedural content?

First and most obvious is the question of why choose procedural content in the first place. For Tanya Short, co-editor of the book Procedural Generation in Video Games, the answer is simple — it’s just more interesting, it’s a way to build something that even its creator doesn’t fully recognize.

“Procedural content and handmade content each have their own strengths and weaknesses. To me, the main difference is that I enjoy creating procedural content far more, as a designer, because my creation has more opportunities to surprise me.”

“Most newbies in procedural generation are afraid of chaos and random nonsense, when what they should be worried about is generating too much samey, boring blandness.”

For Justin Ma of Subset Games, who utilizes procedural content both in their breakout hit FTL as well as their new title Into the Breach, it’s not necessary to exclusively choose one or another, either procedural or handmade.

“Both approaches to content creation have a place in the games world and both can lead to great results if executed well,” Ma says. “Speaking as a player of games, I am generally less interested in completely linear games that test if you can achieve the goals laid forth by a designer exactly how they intended you to play.  I’m much more interested in exploring dynamic systems that give the player a lot of leeway to express themselves.  However the most compelling games to me often use both procedural and handmade content.”

The key focus is always to ensure that at the end you have a product that’s engaging and fun, above all other considerations; procedural content, properly deployed, can help with that.

“You COULD simulate complex procedural systems that are amazing from a technical perspective, but if they’re not aiding in making the game enjoyable, it’s kind of a waste.  For example, we found that heavily restricting the procedural generated elements was necessary to make Into the Breach‘s battles less frustrating. Instead, we relied on creating simple mechanics that can interact with each other in a variety of interesting and dynamic ways.”

That solution is a perfect illustration of how bespoke and randomly generated content aren’t necessarily oppositional, and can actually be used to amplify each other’s strengths.

“With Into the Breach we found that the amount of fun you’d have with the game was highly dependent on where enemies stood in relation to their environment.  If an enemy is in a corner attacking a building, it’s much more likely that you’ll be unable to neutralize the threat without causing collateral damage to the buildings,” continues Ma. 

“This feeling of having no potential solution was unacceptable to us, so it became clear that we would need to create environments that would minimize the chances of this happening. The types of problematic terrain configurations were quite varied, so we eventually decided that we could not rely on procedural generation of maps.  Instead we opted to manually design the basic layout of each map and use procedural generation for less important aspects of maps (such as forests and sand locations) as well as elements that were unique to specific missions (such as where key structures were spawned).  The result was a lot of variety of maps while avoiding problematic layouts as much as possible – however if you play enough you will likely start recognizing key features.”

Zabir Hoque, an engineer at System Era Softworks whose 2016 release Astroneer leans heavily on procedurally generated elements, agrees that the best results are often produced when procedural and bespoke content is married together.

“Generally, handmade content allows for a level of polish that is very hard to replicate with an entirely procedural system,” Hoque says. “The goal with procedural content is to provide novelty, but we want some level of familiarity so the player isn’t just experiencing chaos. Artists and designers can be extremely convincing, and building procedural systems that can match that level of care and quality can be quite hard. That being said, having procedural systems enables scenarios that would otherwise be impossible – for instance, in the case of Astroneer, creating varying game planets that provide alternate experiences each play through.”

In generating the landscapes of Astroneer, Hoque and his team use handmade assets as landmarks to break up procedural content and make the whole seem less random and unfamiliar. They treat these handmade pieces as having an “area of influence” that extends around them into the procedural content and influences it in terms of player experience. For Hoque, building procedural tools isn’t just a way to save labor or reduce the amount of artistic content required to build a game; quite the contrary, it’s a way to empower artists.

“From a technical side I think the most important thing to consider when building procedural systems is how to allow for artistic control at various authoring levels,” Hoque explains. “You want to allow for authoring a rule set for general use cases, and then provide enough vectors of configuration so users can exercise artistic control when needed.“

And Hoque echoes Ma’s sentiments about ensuring that the final product is entertaining and enjoyable, about serving the player first.

“With our terrain system, we could just use Perlin noise everywhere in the terrain with random values and say ‘Look! It’s different every time!’ but this is what leads to the feeling of bland repetition. Instead, we try to think of how the player will play the game and when they’ll seek out novelty, and that is where we try to introduce variation.”

Putting procedural content to work

Once you have landed on procedural generation, the challenge becomes masking its randomly crafted nature as much as possible, or unifying it with handmade assets.

“Honestly, there are so many considerations it’s hard to sum up,” Short says. “Basically, most newbies in procedural generation are afraid of chaos and random nonsense, when what they should be worried about is generating too much samey, boring blandness.”

Short plans to address this very issue in her GDC talk on March 22nd, where she’ll refer to the work of designers like Dr. Emily Short (who develops middleware artificial intelligence and security solutions at Spirit AI) and Jason Grinblat (of Caves of Qud fame).

“It’s a bit pretentious to call your algorithm a co-author, but it’s also extremely true, because often your A.I. friend will surprise you with the outputs, in both good and bad ways. It’s the joy and the horror of the medium.”

“It’s a scary, giant topic that I don’t think anyone has solved, so I hope nobody in the audience brings tomatoes. But as a quick summary, I recommend first analyzing your system inputs AND outputs for orthogonality — the less overlapping your different modular pieces and systems can be, and the less overlapping their expressions can be, not only are they more likely to be coherent, but also the more interesting your potential user experience will be.”

Short is also quick to point out that it’s often more important that you design intelligent systems to link and organize your content than it is to agonize over the content itself.

She emphasizes the “high value in writers creating a higher order of meaning that correlates data according to themes. It’s common to carefully prune your molecules, but less common to actually give those molecules alignments, so that they’re more or less likely to find each other.”

By way of example, Short talks about creating tools to define a character’s job, drawing on and arranging a matrix of data points in logical ways.

“The systems therein invent new organizations or symbology or founding myths on the fly, in order to inform the occupation. But rather than just rolling dice and smooshing 100 random things together, it would be more interesting for players if you first picked an archetype for the character, according to some logic of your universe — perhaps according to playing card suits, say, Hearts, Spades, Diamonds, and Clubs,” Short says. 

“If a character is a Diamond-type character, you’ll still roll dice to smoosh some things together, but your algorithm will also be able to predict more of an interesting, coherent narrative for the character, informed by their Diamond nature (perhaps a storyline to do with wealth, politics, success, or greed). No matter how you curate the corpus you draw from, it will never have the same kind of boutique variety-of-coherency that this higher order of meaning imposes.”

She’s experimented with these types of high order arrangements, but says there’s work left to be done.

“We dipped our toes in this, in the way we implemented Moon Hunters‘ hero myths,” she explains, “according to archetypes of Cunning or Bravery, but it was more limited and provided fewer surprising combinations than a ‘suit’ based approach would have. A core lesson that literally all procedural designers repeat like a mantra is to iterate, iterate, iterate — same as the rest of game development, you have to see how it goes, look at the results, and go back to tweak and refine your work. It’s a bit pretentious to call your algorithm a co-author, but it’s also extremely true, because often your A.I. friend will surprise you with the outputs, in both good and bad ways. It’s the joy and the horror of the medium.”

For Ma, the first step in properly employing procedural generation is to try to avoid drawing attention to the fact that it’s been randomly generated, which means eliminating things like repeated, identical assets and blending content of different provenance.

“One frequently used way to avoid a ‘bland’ generation is to mix in hand-made content,” he says. “Games have varying ways of doing this – using a handmade map with elements within it randomized (Into the Breach); dividing the map into randomly selected smaller pieces which could contain random elements within (Spelunky); using set-pieces that get worked into a random maps (the temples in Minecraft); or entirely generated maps that have different ‘biomes’ within, each with their own principles of generation (hallways of a ‘prison’ intersecting with a lake in Brogue).”

Short, however, makes the case for letting some of your seams show as a way of being transparent with the player, for how letting the user know that they’re interacting with machine generated content can itself be a source of amusement.

“I actually think designers shouldn’t worry as much about making their procedural content too sensible or ‘seamless’, at least such that it would pass for human-made — it’s actually desirable for a player to know that they are engaging in a machine-co-authored system, and to enjoy it in that way,” she says. “Someday maybe we’ll exhaust this and it’ll be more interesting for machines to impersonate people for things other than interfering in elections, but for now, there’s lots of low-hanging fruit when the player knows they are spinning a kaleidoscope rather than viewing a static image. It can be intrinsically fun to learn a system’s patterns and understand how they work together.”

Ma believes that those patterns work best when they’re established by a joint effort between designer and machine.

“We highly restrict how much randomness plays a role in key designs like pacing and difficulty. In FTL that meant determining the exact number of events that had fights vs events that gave free rewards in each sector (though their locations within the sector were completely random),” he recalls. “With Into the Breach, the exact enemy types that spawn are random but we restrict the numbers of certain enemy types that can spawn at any given time. You can rely on randomness to create variety, but you still have to highly curate what occurs to ensure the game is fun and fair.”
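The curation Ma describes, fixed composition with random placement, can be sketched as follows. The counts and names are illustrative, not FTL's actual values or code:

```python
import random

def build_sector(num_beacons, num_fights, num_rewards, rng=random):
    """Curated randomness in the spirit of FTL's sectors: the *counts* of
    fight and free-reward events are fixed by the designer, while their
    *locations* among the beacons are fully random."""
    assert num_fights + num_rewards <= num_beacons
    events = (["fight"] * num_fights
              + ["reward"] * num_rewards
              + ["empty"] * (num_beacons - num_fights - num_rewards))
    rng.shuffle(events)  # random placement, designer-fixed composition
    return events

# Every generated sector has exactly the designed mix of encounters, so
# difficulty stays predictable even though the layout always varies.
sector = build_sector(num_beacons=20, num_fights=8, num_rewards=5)
```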

In a similar way, and akin to how Short talks about including high-level ordering of random content, Hoque and the Astroneer team are careful to ensure that the procedural content in their game is defined by authored limiters. For instance, when crafting the area around the player’s spawn point, they begin with a tame subset of terrain features (rolling hills and gentle slopes), and only as the player moves further from that starting point do they introduce wilder, more varied elements like cliffs, caves, and mountains.

“The hope is that while you are progressing in Astroneer, and resources that you need are on a 100-foot-tall plateau on a different planet, you use your experiences with the gentler early terrain to figure out what to do,” says Hoque. “On the technical side we take two general rules of Perlin & Billow noise and blend them using artistic discretion based on play-tests. When the player has explored for a bit and has enough experience to leave the home planet, other planets then use these noise functions in different ways to create more difficult and varied terrain. In this way, we are combining simpler general rules to provide a more tuned experience.”

Naturally, some elements lend themselves better to procedural construction than others. Terrain will always be easier to machine-generate than living things.

“With noise, we can generate fairly unique terrain over entire planets, but procedurally generating something like creatures is a whole different process,” Hoque continues. “Humans are so familiar with organic creatures that it is a very tough problem to procedurally generate creatures that are modeled, shaded, and animated in convincing ways.”

The procedural toolbox

The upside of the proliferation of procedural content for designers is that there are a number of tools available to create it (and a fair amount of institutional wisdom exists around how to properly employ those tools). That experience is hard won, though.

“Our original terrain authoring tool was very prescriptive and constrained even though it was procedural,” Hoque says. “The terrain was composed of a stack of modifiers that would continue to manipulate and refine the base planet’s sphere shape. The stack of modifiers had baked in notions of where certain terrain features would be expressed. Thus, each novel terrain feature required revisiting the authoring tool to support its reordering within the modifier stack. For instance, the cave tunneling terrain modifier couldn’t be inverted and placed above ground to form bridges without writing a bespoke modifier that did just that.”
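The rigid pipeline Hoque describes might look something like the following sketch, a hypothetical toy rather than System Era's actual tool, with the fragility showing up as a fixed, order-sensitive modifier list:

```python
from typing import Callable, List

# A prescriptive modifier stack: terrain is built by running an ordered
# list of modifiers over the base shape. Each modifier bakes in
# assumptions about where it sits in the stack.

Heightfield = List[List[float]]
Modifier = Callable[[Heightfield], Heightfield]

def base_sphere(size: int) -> Heightfield:
    """Flat stand-in for the planet's base sphere surface."""
    return [[1.0] * size for _ in range(size)]

def raise_hills(field: Heightfield) -> Heightfield:
    """Adds alternating bumps across the surface."""
    return [[h + 0.5 * ((x + y) % 2) for x, h in enumerate(row)]
            for y, row in enumerate(field)]

def carve_caves(field: Heightfield) -> Heightfield:
    """Assumes it runs *after* the hills over a finished surface; inverting
    it to form bridges would require a bespoke new modifier."""
    return [[h - 0.8 if (x % 4 == 0 and y % 4 == 0) else h
             for x, h in enumerate(row)]
            for y, row in enumerate(field)]

# The baked-in ordering: new feature types mean revisiting this stack.
MODIFIER_STACK: List[Modifier] = [raise_hills, carve_caves]

def build_terrain(size: int) -> Heightfield:
    field = base_sphere(size)
    for modifier in MODIFIER_STACK:
        field = modifier(field)
    return field
```

Because `carve_caves` only makes sense in its slot near the bottom of the stack, repurposing it (say, for above-ground bridges) means writing a new modifier rather than recombining existing ones.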

A system based around modifier stacks presented a serious impediment to the kind of creativity Hoque was hoping to engender. When the game first hit Early Access on Steam, the team decided to tear down their planet-generating tech and start from scratch.

“The new terrain system is a flexible shader graph very similar to Unreal Engine’s material shader graphs. However, instead of shading 2D pixels with RGB data, we shade 3D voxels with voxel occupancy and object placement probability data. This approach allows us to sculpt the terrain to the artist’s needs so long as it can be expressed in a functional manner.”
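The idea of shading 3D voxels with density instead of 2D pixels with color can be sketched as a tiny node graph. Everything here (the node names, the CSG-style carve operation, the radii) is a hypothetical illustration of the concept, not System Era's actual graph:

```python
import math
from typing import Callable

# Each "node" maps a 3D position to a scalar density; a voxel is
# considered solid where the density is positive. Nodes compose freely,
# unlike the fixed modifier stack.

Node = Callable[[float, float, float], float]

def sphere(radius: float) -> Node:
    """Base planet shape: positive inside a sphere of the given radius."""
    return lambda x, y, z: radius - math.sqrt(x * x + y * y + z * z)

def cylinder_x(radius: float) -> Node:
    """An infinite tunnel along the x axis."""
    return lambda x, y, z: radius - math.hypot(y, z)

def add_field(a: Node, b: Node) -> Node:
    """Layer one density field onto another, e.g. hills on the sphere."""
    return lambda x, y, z: a(x, y, z) + b(x, y, z)

def carve(a: Node, b: Node) -> Node:
    """CSG difference: solid where `a` is solid and `b` is not. This is
    the 'cave tunneling' idea expressed as a composable graph node."""
    return lambda x, y, z: min(a(x, y, z), -b(x, y, z))

def occupied(graph: Node, x: float, y: float, z: float) -> bool:
    return graph(x, y, z) > 0.0

# Compose a graph: a planet of radius 10 with a tunnel bored through it.
planet = carve(sphere(10.0), cylinder_x(2.0))
```

The appeal over a fixed stack is that any node can feed any other: the same `carve` works whether it is cutting caves below ground or inverted to leave bridges above it.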

Astroneer’s terrain shader graph editor was constructed inside the Unreal Engine, which Hoque calls a “great framework that allows us to build custom tools for the game.”

“The terrain shader graph is JIT compiled to machine code using LLVM to support live editing scenarios. This is similar to the process of a website JIT compiling JavaScript code on the fly to improve performance,” he continues. “When we actually package and ship a build, the terrain shader graph is ‘Ahead of Time Compiled’ for maximum performance. This allows full function inlining and constant propagation. This is essentially the same process that C++ engine code goes through. The 2017 GDC presentation on how Horizon Zero Dawn spawned objects was a huge influence on how our tech works, except we are doing those things in 3D.”

In closing, Short reminds fellow devs that not all the tools for procedural generation are complicated development engines or software suites — there are lots of options for those looking to dip into the waters of procedural generation.

“When I contributed to G.P. Lackey’s Unknown Peoples twitterbot, I used Notepad!” Short says. “Cheap Bots Done Quick (using Tracery) is a fantastic technology, great for dipping your toes into the first steps of smooshing random text together and assembling human-readable grammars. It won’t let you do the higher-order meaning stuff I mentioned before, but it’s a good way to get started. We used it for Unknown Peoples — and with G.P.’s permission, here’s its source code.”
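The "smooshing random text together" that Tracery enables can be sketched as a small recursive grammar expander: rules map symbol names to lists of alternatives, and `#symbol#` tags are replaced with a random expansion of that rule. The grammar below is invented for illustration and is not from Unknown Peoples:

```python
import random
import re

# A toy Tracery-style grammar. Each rule is a list of alternatives, and
# '#name#' tags inside an alternative are expanded recursively.
GRAMMAR = {
    "origin": ["The #people# are known for #trait#."],
    "people": ["river-folk", "cliff dwellers", "star readers"],
    "trait": ["#craft# and #craft#", "their love of #thing#"],
    "craft": ["weaving", "song", "stonework"],
    "thing": ["storms", "bells", "maps"],
}

def expand(symbol, grammar, rng=random):
    """Pick a random alternative for `symbol`, then recursively expand
    every '#name#' tag found inside it."""
    text = rng.choice(grammar[symbol])
    return re.sub(r"#(\w+)#",
                  lambda m: expand(m.group(1), grammar, rng),
                  text)

random.seed(1)
print(expand("origin", GRAMMAR))
```

Even this tiny grammar yields dozens of distinct outputs, which is exactly the "kaleidoscope" quality Short describes; what it cannot do, as she notes, is the higher-order ordering of meaning across outputs.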

“Honestly, any game tool can be used towards procedural content — Unity, GameMaker, whatever,” she adds. “Go nuts. Be brave.”

Video Game Deep Cuts: A Jump Scare Soda Machine

The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.


[Video Game Deep Cuts is a weekly newsletter from curator/video game industry ‘watcher’ Simon Carless, rounding up the best longread & standout articles & videos about games, every weekend.

This week’s highlights include devs on the effectiveness of jump scares in horror games, a unique in-game soda machine documenting project, and LOTS more.

Well, that darn Game Developers Conference show which I help organize is coming up fast – here’s the video trailer for the IGF Awards and for the Game Developers Choice Awards, btw. So there’s not a bunch of time for metacommentary. But there’s a lot of good writing/video out there for games, and I’m really happy to highlight it – even in busy times!

Until next time,
– Simon, curator.]

——————

Richard Garfield, Skaff Elias, And Valve On Balancing, Community, And Tournaments In Artifact (Suriel Vazquez / Game Informer – ARTICLE)
“I had the chance to sit down and chat with a few of the people behind the game, including Magic: The Gathering designers Richard Garfield and Skaff Elias, and ask [about] how the game will handle its economy, how Valve gauges success, communication, and balance, and how tournaments will be an integral part of Artifact. [SIMON’S NOTE: Valve making a game is always news, and there’s a 19-minute Gabe Newell speech about ‘the state of Valve’ to digest too!]”

The Return Of Combat Chess (Noclip Bonus Level / YouTube – VIDEO)
“Welcome to Noclip Bonus Level – the show where we take a break from making documentaries for five minutes, to analyze the world of video games around us. In our first episode, Danny takes a look at the history of the first-person-shooter and celebrates the return of combat chess.”

How an injured dev learned to make games without using his hands (Brittany Vincent / Gamasutra – ARTICLE)
“Losing the use of your hands can be a devastating blow to both your career and your lifestyle. However, there are people out there who face that very issue every day and are able to persevere and adapt. Austin-based developer Rusty Moyher is one such person.”

Ubisoft is using AI to catch bugs in games before devs make them (Matt Kamen / Wired UK – ARTICLE)
“At the recent Ubisoft Developer Conference in Montreal, the French gaming company unveiled a new AI assistant for its developers. Dubbed Commit Assistant, the goal of the AI system is to catch bugs before they’re ever committed into code, saving developers time and reducing the number of flaws that make it into a game before release.”

How is this speedrun possible? Super Mario Bros. World Record Explained (Bismuth / YouTube – VIDEO)
“[SIMON’S NOTE: since a lot of speedrun lore/methodology is super arcane, these speedrun ‘explainer’ videos end up being super interesting/cool, and this one is no exception.]”

In San Francisco, Video Games Are Classical Music’s New Frontier (Nastia Voynovskaya / KQED – ARTICLE)
“Clint Bajakian, whose composition and sound design credits include major game franchises like Star Wars, God of War, and Indiana Jones, has observed an explosion of audio jobs in the gaming industry over the course of his 26-year career. But he says most conservatories are still focused primarily on live performance, despite students’ growing interest in tech.”

Metal Gear Survive: The Kotaku Review (Heather Alexandra / Kotaku – ARTICLE)
“Metal Gear Survive is wreckage of the long battle between the series’ lead creator, Hideo Kojima, and Konami, his long-time employer who ditched him three years ago. Kojima split with Konami after a lifetime of making Metal Gear games, going back to the first in 1987. Survive is Konami’s attempt to make a new one without him. It’s a bad game that’s more fascinating than I expected. [SIMON’S NOTE: this is a _really_ good review – keep reading.]”

Playing Video Games With My Son Isn’t What I Thought It Would Be (David Cole / The Cut – ARTICLE)
“Most of my childhood friends were also boys with single mothers, and video games played a similarly contradictory role in their lives. Yes, they were indulgences. But they were also babysitters, being one of the more reliable methods for a group of young boys to be left unattended for hours with little risk of injury or property damage.”

Guardian Heroes – 1996 Developer Interview (Game Hihyou / Shmuplations – ARTICLE)
“Of particular note is HAN’s antipathy towards the adversarial nature of vs. fighting games and his drive to lead them in a more communal direction, a shift many developers continue to attempt, and struggle to achieve, to this day. [SIMON’S NOTE: so glad Shmuplations is back – some really good translations of rare Japanese dev interviews here.]”

Stealth Game Changers | The Swindle’s Imperfect Burglary (Stealth Doc / YouTube – VIDEO)
“Where many Stealth Games condition players to aim for nothing less than perfection, The Swindle asks, ‘Can you live with your mistakes?’”

Anachronox (Errant Signal / YouTube – VIDEO)
“Anachronox! You know! The last of the three big Ion Storm original properties; the one that came out just before they closed the Ion Storm Dallas offices. The one with the comedy and the JRPG mechanics and the vaguely Ron Gilbert sense of humor. The one made by Tom Hall! That Anachronox. No? Well, listen up…”

Donald Trump Takes on the Nonexistent Link Between Violent Video Games and Mass Shootings (Simon Parkin / New Yorker – ARTICLE)
“On December 21, 2012, Wayne LaPierre, the executive vice-president of the National Rifle Association, which has lent its name to video games including N.R.A. High Power Competition… delivered a speech in which he laid the blame for school shootings in the United States at the feet of various pop-cultural transgressors. [SIMON’S NOTE: good summing up of the pre-meeting – Kotaku has a useful follow-up with video of the ‘violent game!’ montage played during the meeting. Bonus: Ars Technica retrospective on the Biden-led ‘games & violence’ meeting at the White House.]”

Into the Breach’s interface was a nightmare to make and the key to its greatness (Alex Wiltshire / RockPaperShotgun – ARTICLE)
““I think we were resolved to having a UI nightmare from the beginning,” says Matthew Davis, co-designer and programmer of Into the Breach. “When we decided we had to show what every enemy was doing every single turn, and that every action needed to be clear, it became clear how bad that nightmare would be,” says Justin Ma, its co-designer and artist.”

What horror game creators think about jump scares (Samuel Roberts / PC Gamer – ARTICLE)
“Jump scares are often considered cheap scares, and to some extent that reputation is justified. I’ve played games and seen films where the scares are exhausting rather than exciting, and they’re not much more sophisticated than someone jumping out of a cupboard and shouting ‘boo’.”

Is it worth cutting out Steam to sell indie games direct? (Samuel Horti / PC Gamer – ARTICLE)
“We spoke to the creators of One Hour One Life, SpyParty, and Overland about cutting out the middle-man. [SIMON’S NOTE: Jason Rohrer’s new blog post somewhat claims victory for the off-Steam model – which I think can work if you’re a ‘known’ dev with a unique proposition. His dev revenue curve will trend down over time – but that’s an impressive start with a different curve to most Steam releases for sure!]”

Game Developers Need A Union (Dante Douglas / Paste – ARTICLE)
“The general understanding is that games are first-and-foremost a consumerist medium, not an artistic one. This viewpoint posits that videogames are a product, not a process. The creative labor of development is flattened in order to serve the narrative of games being a simple transaction—give money, get game. [SIMON’S NOTE: I did mention to Dante on Twitter that the IGDA’s Jen MacLean has a roundtable at GDC on this massively complex subject.]”

This Professor Has Documented 2,000 Soda Machines in Video Games (Patrick Klepek / Waypoint – ARTICLE)
“In 2016, Marshall University professor Jason Morrissette was playing Batman: Arkham Knight. While sneaking around the shadows, Morrissette stumbled upon a soda machine. Like many games, Arkham Knight doesn’t feature any real-life soda products; that’d cost money. Instead, the developers simply made up their own: Sparkle Fizz.”

Chuchel is a whimsical adventure game that’s guaranteed to make you smile (Andrew Webster / The Verge – ARTICLE)
“The first thing you do in Chuchel, a new comedic adventure game, is feed a green blob in a top hat some water. Give it enough, and it will spew out liquid like a faucet, right into the mouth of a strange fuzzy orange ball. [SIMON’S NOTE: one of my ‘hey, new game!’ links for the week – from the Machinarium crew!]”

The Old, Weird America of Where the Water Tastes Like Wine (Garrett Martin / Paste – ARTICLE)
“Where the Water Tastes Like Wine explores that unsteady territory between history and legend. It’s a game built on stories and how people tell them, how they can start out rooted in a body’s real experience and then grow more elaborate and fanciful over time.”

IGDA exec dir Jennifer MacLean defends video games on MSNBC (MSNBC / YouTube – VIDEO)
“On March 8 2018, Jennifer MacLean, Executive Director of the International Game Developers Association, appeared on MSNBC to discuss the day’s Trump administration meeting on video games and gun violence. [SIMON’S NOTE: Thanks to Mark DeLoura for clipping this onto YouTube – it’s nice to see an eloquent game association head on TV explaining things, right?]”

——————

[REMINDER: you can sign up to receive this newsletter every weekend at tinyletter.com/vgdeepcuts – we crosspost to Gamasutra later on Sunday, but get it first via newsletter! Story tips and comments can be emailed to vgdeepcuts@simoncarless.com. MINI-DISCLOSURE: Simon is one of the organizers of GDC and Gamasutra & an advisor to indie publisher No More Robots, so you may sometimes see links from those entities in his picks. Or not!]

Daily Deal – ASTRONEER, 20% Off

Today’s update unveils Dota Plus, a new monthly subscription service designed to help you get the most out of every Dota 2 match you play.

Dota Plus is an evolution of the Battle Pass. In the past we released two types of Battle Passes, ones that revolved around the Majors, and one around The International. As a result of the recent introduction of the Pro Circuit, we’ve replaced the Majors Battle Passes with a new type of service that doesn’t depend on a specific start and end date, and one that we can continually add features and content to over time.

This reimagining of the Majors Battle Pass will be an ongoing, uninterrupted service filled with features that provide both progression and opportunities for improvement. Hero Leveling offers you a way to make progress every match while earning Shards, a new currency that can be used to unlock rewards. Test your skills by completing hundreds of new Challenges of varying difficulties. Use the Plus Assistant to help you make build decisions by utilizing real-time item and ability suggestions. These suggestions are based on data gathered from millions of recent games at each skill bracket, ensuring your builds stay current in the ever-evolving meta. Better understand your playstyle by using new graphs and analysis tools both during the match and after it’s finished.

This update also includes the return of the Battle Cup. All Dota Plus members will have free weekly access to play in the weekend tournaments, while non-members can still purchase tickets for $0.99 to participate.

Everything Dota Plus has to offer is available now for $3.99 per month. Receive a discount if you sign up for a six- or twelve-month subscription. Want to gift Dota Plus to a friend? You can purchase gift memberships that will remain active for a fixed number of months.

Check out the Dota 2 store page for more information on all of the features included in your subscription.

Daily Deal – Gremlins, Inc., 50% Off

Now you can muck around with the Build engine successor: Build2

Roughly twelve years after he started working on it, Build engine creator Ken Silverman has published a working version of the engine’s unfinished successor: Build2.

This is a big deal given what an impact games built on the original Build engine (including Duke Nukem 3D, Shadow Warrior, Blood, and William Shatner’s TekWar) had on the game industry in the ’90s.

Now, devs can get an idea of what Build2 games might have looked like — and how things might have gone differently if Silverman had seen the project through. He’s published a build of the engine that anyone can download from his website, along with an editor and some script samples.

Also, YouTuber CuteFloor published a video (embedded above) which affords you a visual walkthrough of the engine and what it can do.

According to Silverman, he started working on Build2 in the summer of 2006, and actually brought it to a summer camp in 2007 to A) teach kids how to make 3D games and B) get some hands-on feedback. This continued for another two years until enrollment dropped off at the camp, and Silverman apparently lost interest.

His decision to now share it online comes not long after 3D Realms resurrected Silverman’s Build engine by releasing a playable preview of Ion Maiden, a new game being developed by Voidpoint on the original ’90s engine.

FUN FACT: There’s a bunch of other interesting odds and ends on Silverman’s website, including KenVex, a failed second successor to Build that Silverman also published online this week.