
How Tetris was discovered in America

Video game historian Norman Caruso has published a new video to his Gaming Historian YouTube channel that breaks down the story of Tetris.

The video covers the history of its creator, Alexey Pajitnov, and contextualizes the political and social climate in which the game was developed.

What follows is an interesting look into how Tetris eventually came to America, and how it was marketed and published for a Western audience.

In 1984, the Communist Soviet Union struggles with a stagnant economy, the decision to boycott the Summer Olympics, and rising Cold War tensions with America.

While the two nations are divided, Russian programmer Alexey Pajitnov works diligently on what will later become Tetris.

Working in his spare time at the office, Pajitnov creates a prototype of the game on an IBM PC and sends free copies to friends and colleagues.

From there, it spread across the Soviet Union and eventually made its way to the United States. But how was it discovered?


Human: Fall Flat dev credits community feedback for success

– Creator of Human: Fall Flat, Tomas Sakalauskas, on how player feedback helped refine the game.

In an interview with Rock Paper Shotgun published today, developer Tomas Sakalauskas shares some lessons learned after publishing Human: Fall Flat, which may be useful to developers wanting to see their games succeed after launch.

Human: Fall Flat is a physics-based puzzle game that launched in 2016 and has recently appeared in the top ten grossing games of the week on Steam.

It’s continued to remain in the top ten, which may be due in part to Sakalauskas’ eagerness to listen to feedback.

The addition of co-op mode in 2017 was influenced by the community around Human: Fall Flat. “Everything happened spontaneously, reacting to the feedback of play-testers, players, streamers, and viewers of those streamers,” explains Sakalauskas. “I didn’t plan co-op mode initially but it was added because of numerous requests.”

But how did Human: Fall Flat manage to remain visible to players after launch? “Steam is surprisingly good at promoting the games to potential players long after the release,” Sakalauskas says.

Sales did well enough to support his one-man studio, and he notes that continuing to show appreciation for his fan base ultimately proved to work out. “I’ve kept adding things and reacting to the feedback. Looking back, it seems to be the right strategy.”

“It’s the same advice that I’d give to anyone who is just starting their game: listen to your players,” Sakalauskas advises for any developers wanting to see their games live on after release. “Players were never taught game design – they simply feel there’s something good or bad but it’s not their job to come up with the solution.”

Not all feedback is heeded, however. “You are the game designer and you make decisions. I try steering the design towards player requests, building upon the positive feedback and avoiding pitfalls exposed by the negative complaints.” 

Be sure to read the entire interview with Sakalauskas over at Rock Paper Shotgun.


Alt.Ctrl.GDC Showcase: Windgolf

The 2018 Game Developers Conference will feature an exhibition called Alt.Ctrl.GDC dedicated to games that use alternative control schemes and interactions. Gamasutra will be talking to the developers of each of the games that have been selected for the showcase.

Windgolf offers players a relaxing round of mini golf, only instead of using a putter, they’ll be blowing on the ball. By breathing into two different tubes and rotating a screen to get a better orientation on the ball, they’ll be able to slowly guide the ball to the hole. Not that taking their eyes off the ball to blow it in the right direction makes that easy.

With this new take on minigolf being playable at GDC’s Alt.Ctrl.GDC exhibit, Gamasutra reached out to developer Pepijn Willekens to learn about the creation of this wind-based golf game, and some of the challenges that came up using such an imprecise input like breath.

What’s your name, and what was your role on this project?

I’m Pepijn Willekens (@PepijnWillekens). I developed all parts of Windgolf: the programming, electronics, 3D, game design, etc. I did get some carpentry help from Thomas Devillé, though (he runs Devillé Arcade, a business building custom-made wooden arcade cabinets).

How do you describe your innovative controller to someone who’s completely unfamiliar with it?

In Windgolf, you play minigolf, but instead of having a club, you control the wind by blowing into the machine, which creates wind in the game. You are like a god of nature that only cares about getting a ball into a hole 😀

What’s your background in making games?

I am currently in the graduation year of my Bachelor in Multimedia Technology. Over the past few years I have become more involved in the Belgian game industry: I co-organised an international indie game festival called Screenshake 2017, co-organised a Global Game Jam location for three years, and co-organised monthly game developer meetups for the past year and a half. I haven’t released any commercial games yet, but I’m currently (also) working on a mobile puzzle game called Boa Bonanza.

What development tools did you use to build Windgolf?

Windgolf uses Arduino for the sensors, which sends this data to Unity over USB. 

What physical materials did you use to make it?

The arcade is built out of wood. For the blow sensors, I used two small speakers as if they were microphones. So, you actually blow directly on the speakers’ membranes, which creates a tiny electrical signal. I amplify this so the Arduino can read it.

For the rotation, I currently use a rotary encoder, but I am planning to change this to a gyroscope or accelerometer. I am also looking into taking some parts from an electric kettle to power the upper rotating part of the arcade. The game itself runs on a laptop with an external screen.
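For readers curious how a speaker-as-microphone blow sensor might be read in software, the general signal-processing idea can be sketched in a few lines: rectify the noisy AC signal coming off the membrane, then smooth it with a moving average to get a usable blow-strength envelope. This is an illustration of the technique only, not Willekens’ actual code (his runs on an Arduino and forwards readings to Unity over USB), and all names here are invented.

```python
def blow_strength(samples, window=8):
    """Estimate blow intensity from raw signed mic samples.

    Blowing on a speaker membrane produces a noisy AC signal centered
    on zero; rectifying and averaging it yields a stable envelope.
    """
    rectified = [abs(s) for s in samples]
    envelope = []
    for i in range(len(rectified)):
        # Moving average over the last `window` samples smooths out noise.
        start = max(0, i - window + 1)
        chunk = rectified[start:i + 1]
        envelope.append(sum(chunk) / len(chunk))
    return envelope

# Silence stays near zero; a burst of blowing produces a rising envelope.
quiet = [0, 1, -1, 0, 1, -1, 0, 0]
blowing = [30, -28, 35, -31, 29, -33, 32, -30]
print(blow_strength(quiet)[-1], blow_strength(blowing)[-1])
```

Comparing the envelope against a threshold would then tell the game whether the player is blowing, and roughly how hard.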

How much time have you spent working on the game?

I estimate that the prototype visible in the video was made in approximately 12 days, spread out over 2.5 months. The game will have progressed a lot by the time I show it at the GDC expo, though.

How did you come up with the concept?

Windgolf started off as a school assignment where we had to “combine sound and Arduino and Unity”. I loved this assignment, so I went way further with the project than the school assignment required.

What difficulties did you face in combining blowing and a moving screen to create challenging puzzles?

Windgolf is a rather clumsy game to play, because rotating around takes quite a physical effort compared to other games where you look around by moving your mouse or joystick. The player’s reflexes are much slower. You have to look away from the screen to blow, after which you realize that the ball will roll just next to the hole, and you have to get to the other side of it in time. The game is aimed at an expo context, so people are free to try to control the arcade in a combined effort 😀

Windgolf allows players to change their perspective by moving the screen as well. What drew you to add this element on top of blowing on the ball?

I imagine the arcade as a sort of window into another world. If you would want to blow a ball in real life in one direction, and after that in another, you would also have to move around the ball to position yourself first.

Was harnessing breath a difficult thing to do as a gameplay mechanic? How did you get it just right? 

Blowing isn’t the most precise input method, but in Windgolf, your precision in rotation is more important than how hard you blow. So, it’s more a matter of giving the player a clear sense of what is happening.

How do you think standard interfaces and controllers will change over the next five or ten years?

I think that haptic feedback in controllers will be given much more attention. Nothing is as satisfying as moving your finger over the pads of the Vive/Steam controllers, or feeling the virtual balls roll in your Nintendo Switch controllers (in 1-2-Switch). Game developers have been mastering the art of wobbling and squeezing characters, shaking screens, and all sorts of audio-visual feedback, but being able to decide what the player feels when they give input to a game allows for a direct way of teaching your player how they interact with it. It allows for more experimentation through standard interfaces. Imagine if the touchscreens of future phones offered the same feedback as the Vive/Steam controller pads.


Don’t Miss: Fine young cannibals—Developing Early Access hit, The Forest

Steam users seem to have quite the affinity for survival-themed games lately.

To name a few strong performers: DayZ, Rust, and 7 Days to Die. They’re all sold as in-development games via Steam Early Access, and all of them give the player one main goal: to survive.

Last week, almost out of nowhere, a new survival contender showed up: The Forest from Endnight Games. The title shot to the very top of the charts within two hours of its launch.

The reason the game went straight to the top partly has to do with the fact that it already had a fan base before the alpha ever went on sale. The buzz actually started with a trailer that accompanied the game’s successful Steam Greenlight campaign last summer. The game was promptly greenlit, and a second trailer helped it gain even more interest.

“Apparently people are really into chopping down trees, and getting chased by cannibals,” laughs Mike Mellor, who’s working on A.I., design and programming for The Forest.

It also helps that the game is quite pretty, which can be attributed to the heavily-modified Unity engine and the fact that the four people who comprise Endnight come from the visual effects industry for film. The visuals and the survival premise have given players something to rally behind.

The game started as a one-person project by Endnight’s Ben Falcone. Burnt out by the movie industry, he left and took a year to make an iPad survival game, also called Endnight. Following that project, he started The Forest about a year ago.

“I wanted to do something bigger and use procedural generation,” he says. “I thought the forest would be a really good place to do that.”

The Forest actually begins in an airplane. In the next seat, a child is sleeping peacefully on your arm, when a sudden jolt occurs, and next thing you know, the front half of the plane is missing and you’re headed for a crash landing in the forest.

As you awake, a mysterious tribesman stands above you, and takes the child away. You lose consciousness again, then regain it and begin your fight for survival against hunger, cold, and some extremely creepy, cannibalistic enemies who tend to show up and do odd and violent things, when all you want to do is finish building your log cabin.

Having interacted directly with the community during the Greenlight campaign, the team at Endnight seems comfortable, if not overworked, in developing a paid alpha. “We’ve had lots of 48-hour-straight ‘work days,’” says Falcone.

But Early Access was the best option for the team. Anna Terekhova, who works on interface, design, level layout and other areas of the game, says, “As a team of four people, it’s really difficult to release a full game, and get all the testing that we need. Early Access allows us to get the community involved and keep it going.”


Falcone says, “Spending all that time to get it to a final version is pretty much impossible for us. But we knew we could get a first alpha build.”

“At first we were all pretty nervous, but it’s pretty awesome to get a big list of bugs. Just getting a list of a hundred bugs, and just knocking out the bugs we missed is great,” he adds. “You do have some people who do expect it to be a finished, polished game. And it is a bit bizarre when some sites review alpha copies of games. Reviewing the 0.01 version is probably not the best thing to do.”

As other small-team game developers have realized, effective implementation of procedurally generated content can make up for the lack of studio personnel. The Forest uses a mix of procedural and non-procedural tools.

“The original intention was to procedurally generate everything, which we did early on,” says Falcone. “The issue is that it’s really boring. I find that any procedural game you play, you just start to see patterns and it’s just not as interesting when you can handcraft something and really dress it up.”

But that doesn’t mean everything in the game is handcrafted and hand-placed. The studio uses a system called Greeble, which procedurally places objects based on simple rules. This makes objects and items appear hand-placed when the placement is actually procedural. The plane also crashes at a random spot on the map each time you play, giving a sense of variety and providing more replayability.
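Endnight hasn’t published how Greeble works internally, but rule-based procedural placement of this kind can be sketched simply: seed a random generator so a given layout is reproducible, then scatter props subject to simple rules such as a minimum spacing between objects. Everything below is a hypothetical illustration, not the studio’s code.

```python
import random

def place_props(seed, area=(100, 100), count=20, min_gap=5.0):
    """Rule-based scatter: deterministic for a given seed, with a
    minimum-spacing rule so props don't clump unnaturally."""
    rng = random.Random(seed)  # seeded RNG -> reproducible layouts
    placed = []
    attempts = 0
    while len(placed) < count and attempts < count * 50:
        attempts += 1
        x = rng.uniform(0, area[0])
        y = rng.uniform(0, area[1])
        # Spacing rule: reject positions too close to an existing prop.
        if all((x - px) ** 2 + (y - py) ** 2 >= min_gap ** 2
               for px, py in placed):
            placed.append((x, y))
    return placed

# Same seed -> identical layout; a different seed -> a different one.
layout_a = place_props(42)
layout_b = place_props(7)
print(len(layout_a), layout_a == place_props(42), layout_a == layout_b)
```

More elaborate rules (slope limits, clustering near landmarks, density maps) follow the same pattern: candidate positions are generated randomly and filtered against the rules until the budget is met.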

“We can have really distinct-looking areas. We’re such a small team, obviously we couldn’t lay out all that stuff. So it works out really well,” Falcone says.

Endnight is a very young studio, approaching game development challenges in its own way and constantly questioning design tropes.

“The biggest [challenge] is just making a game and taking a different path than a lot of games would,” says Falcone. “There was a point about three months ago where I was saying to Anna, ‘I wish we had just copied another game.’ Everything we do, we’ve been trying to rethink how it’s being done.” Features such as the 3D crafting system and the game’s survival handbook went through hundreds of iterations before their current form. Everything is being made with VR compatibility in mind as well, adding an extra component of difficulty.

Falcone also says that getting the player to simply do something has been a challenge, especially when you want a truly open design that isn’t heavily based on missions and quests.

“The reason games bombard you with missions and pickups and collectibles and things to do is because otherwise the world feels dead. What do you do in a world where you’re not being forced to do anything?” he says. People would play the game and just not know what to do after stepping out of the plane’s wreckage at the start.

“Until we had enough elements in the game to make it interesting — lizards climbing trees, rabbits running around, plants all cuttable and destroyable — there were a few scary points where we thought maybe we should turn it into Far Cry 3,” he laughs.

Endnight is instead following its own vision for The Forest, and now that it’s selling well, the studio has a commitment to its players. Falcone expects the “final” version to launch around six months from now. In the meantime, the studio will be adding features, squashing bugs and learning the lessons taught by launching a game on Steam Early Access.

“That’s what we thought, that it could be as buggy as hell, but as long as the vision for the game came through, I felt that we’d be in a good place,” says Falcone. “And I feel like we got there, where it is super, super buggy, sometimes broken, but you can actually see the vision. I think for everyone in the audience, that’s the most important thing.”


Daily Deal – Late Shift, 40% Off

Total War Saga: Thrones of Britannia is Now Available for Pre-Purchase on Steam and is 10% off!*

Thrones of Britannia is a standalone Total War game which will challenge you to re-write a critical moment in history, one that will come to define the future of modern Britain. With ten playable factions, you must build and defend a kingdom to the glory of Anglo-Saxons, Gaelic clans, Welsh tribes or Viking settlers. Forge alliances, manage burgeoning settlements, raise armies and embark on campaigns of conquest across the most detailed Total War map to date.

*Offer ends at 10AM Pacific Time


Designing Total War: Warhammer II to handle tons of units and massive battles

For venerable U.K.-based studio Creative Assembly, Total War: Warhammer II represents a massively ambitious undertaking.

Not only is it one of the deepest strategy games currently available, the recent (free) Mortal Empires update integrates almost all of the content from the first game into the newest iteration — including the stitching together of two massive, sprawling campaign maps, and bringing all of the diverse factions, units, and leaders from the original forward into the sequel.

The team also recently launched the first major DLC expansion, Rise of the Tomb Kings, which incorporates yet another race and a handful of new factions to bolt onto the staggering 117 already present in the game.

Managing all this content and making it behave together is a huge undertaking, and one, according to Creative Assembly’s Al Bickham and Scott Pitkethly, that has taught their team a number of important lessons along the way.

“So much extra effort went into the development of Warhammer II,” Bickham says, “with attendant changes in so many areas. The databases and codebases for both games had diverged to the point where merging content from one branch to the other caused us a great deal of errors.”

Teaching an old dog new tricks

This was a particularly nettlesome issue when the team was integrating the Norsca Race Pack from the first game, which added a host of marauding northern tribes. The divergent codebases “pushed back our plans to integrate Norsca into Warhammer II and Mortal Empires, as we now have to re-implement all those individual content, data and code components by hand over time,” adds Bickham. “An annoying – if useful – lesson learned!”

“The databases and codebases for both games had diverged to the point where merging content from one branch to the other caused us a great deal of errors.”

It’s a very specific example of the sort of issues that trouble such a huge project, but some of the initial hurdles were much more banal and commonplace.

One of the big challenges of a game of Warhammer II’s scale is rendering thousands and thousands of units on screen at once, preferably with as little a performance hit as possible. According to Pitkethly, in designing the game they tackled the problem with a multipronged solution.

“[Dynamic] environment and unit LODs [Levels Of Detail] obviously help a great deal from a GPU-load perspective, and we have some clever LOD techniques to reduce our resource requirements,” he said. But level of detail solutions only partially resolved the issue, and in a game that enables dynamic zooming and camera rotation, additional techniques were mandatory. “Total War: Warhammer also saw us switch to multicore processing in earnest, so we distribute tasks across a greater number of CPU threads, and run them in parallel.”
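The LOD idea Pitkethly mentions boils down to swapping in cheaper models as units get farther from the camera. A toy distance-threshold selector illustrates the principle; the thresholds and function name are invented for this example and have nothing to do with Creative Assembly’s actual values.

```python
def pick_lod(distance, thresholds=(50.0, 150.0, 400.0)):
    """Return a LOD index for a unit at `distance` from the camera:
    0 = full-detail model, len(thresholds) = cheapest representation.
    Thresholds here are illustrative, not the game's real values."""
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)

print(pick_lod(10.0))   # 0: close-up, full-detail mesh
print(pick_lod(200.0))  # 2: mid-range, reduced mesh
print(pick_lod(999.0))  # 3: distant, impostor/billboard tier
```

In practice engines also add hysteresis around each threshold so units hovering at a boundary don’t visibly pop between detail levels every frame.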

The real secret weapon in Creative Assembly’s arsenal, however, seems to be a brilliant sort of prognostication.

“In Total War, the battle logic and animation pipelines (both of which are very CPU intensive) have also been decoupled from the display, allowing them to run simultaneously,” Pitkethly says. “In a battle scene, the logic generates the ‘future’ whilst the display renders the ‘now’. This future logic-state can be created over many display frames, allowing us to calculate complex interactions without impacting frame rate.”
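The decoupling Pitkethly describes can be pictured as a logic loop that fills a small buffer of future states while the renderer consumes the oldest one at its own pace. The toy sketch below shows only the pattern; the real pipeline is vastly more elaborate and spreads its work across CPU threads.

```python
from collections import deque

class DecoupledSim:
    """Toy version of running battle logic ahead of the display:
    logic ticks fill a buffer of future states; the renderer
    consumes them in order and never waits on the logic."""

    def __init__(self, lookahead=3):
        self.state = 0          # stand-in for a full battle state
        self.future = deque()   # states computed but not yet shown
        self.lookahead = lookahead

    def logic_tick(self):
        # The expensive battle/animation update happens here; it can
        # be spread over many display frames because it computes
        # the "future", not the frame currently on screen.
        if len(self.future) < self.lookahead:
            self.state += 1
            self.future.append(self.state)

    def render_frame(self):
        # Display the oldest precomputed state -- "the now".
        return self.future.popleft() if self.future else None

sim = DecoupledSim()
for _ in range(5):            # logic runs ahead, capped by lookahead
    sim.logic_tick()
print(len(sim.future))        # buffer never exceeds the lookahead
print(sim.render_frame())     # renderer shows the oldest state first
```

Because display frames only ever read completed states, a momentarily expensive logic step eats into the buffer instead of stalling the frame rate.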

Another difficult quandary presented itself as the team was designing and integrating units of different types. When your game includes flying units, infantry, cavalry, artillery, commanders, and fantastical beasts ranging from animate stone giants to ravenous horned demons, it becomes a delicate, complex balancing act ensuring they all interact appropriately with one another on the battlefield (and sell the damage done to them by other units in believable ways). Unsurprisingly, it’s another problem that requires a suite of solutions.

“There’s quite a number of techniques we employ, often with bespoke approaches to cater for different situations,” says Pitkethly. “Some solutions are based on animation-set choices, for example; we may author – or choose to cull – specific animation-sets for units fighting atop, say, a castle wall, in order to avoid weird-looking situations or extreme movements which would otherwise be fine on open ground. We may then capitalize on certain situations by authoring matched fighting animations between specific entities.”

For Bickham, sprinkling in these little moments is a crucial way to make players feel as though they’re witnessing a real, dynamic fight, as opposed to a lot of complex math.

“A good example in Warhammer II is when a Carnasaur [basically a giant T-Rex!] fights a grounded dragon,” he says. “In general they’ll trigger their usual attack animations, but in rare situations you’ll see the dragon clutch the Carnasaur and try in vain to fly off with the thing, before being dragged back down to the ground and counterattacked with a hefty head-butt. These occurrences are rare enough to make them feel special and epic when they do happen.”          

The issues surrounding unit interaction are further compounded, however, when uneven terrain or structures are involved, particularly in the case of some of the game’s enormous, signature sieges. There’s a colossal amount of data to process in these situations, particularly, Pitkethly says, because of the way Warhammer II models combat.

Matching up what the engine sees with what the player sees

“We physically simulate every projectile fired by each individual soldier or entity, and accuracy is variable according to the firer’s quality and the target’s situation. And every building and terrain-feature provides a potential obstacle. Calculating those shots en-masse is expensive, so we have to be clever about how we request such checks in order to keep performance high.”

And it’s not just a matter of properly optimizing, managing, and queueing combat result checks; there’s also the issue of line of sight. Pitkethly asserts that one of the biggest problems for the team is the difference between what a player sees and what the engine actually sees.

“We physically simulate every projectile fired by each individual soldier or entity, and accuracy is variable according to the firer’s quality and the target’s situation…Calculating those shots en-masse is expensive, so we have to be clever about how we request such checks in order to keep performance high.”

“From a sky-eye view with your whole army visible, a tiny wrinkle in the landscape between your riflemen and their quarry may be barely discernible; to the player, the riflemen appear to have line of sight. From a soldier’s-eye view however, that wrinkle may actually be a low hill the height of a bus-stop, blocking their line of sight. So when the player orders them to fire, they begin moving forward into a position where they can draw a bead on the target, leaving the player baffled as to why they’re marching blithely towards the enemy instead of firing as ordered!”
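The “wrinkle in the landscape” problem is, at its core, a heightfield line-of-sight test: sample the terrain between shooter and target and check whether it rises above the straight sight line between them. A simplified one-dimensional illustration (not the game’s code; the function and parameters are invented):

```python
def has_line_of_sight(heights, shooter, target, eye=1.8):
    """1D heightfield LoS check: heights[i] is terrain height at
    position i. Walk the cells between shooter and target and see
    whether the terrain rises above the straight sight line."""
    x0, x1 = sorted((shooter, target))
    h0 = heights[shooter] + eye   # eye height of the firer
    h1 = heights[target] + eye    # aim point on the target
    for x in range(x0 + 1, x1):
        # Height of the sight line at x, by linear interpolation.
        t = (x - shooter) / (target - shooter)
        sight = h0 + t * (h1 - h0)
        if heights[x] > sight:
            return False          # a "low hill" blocks the shot
    return True

flat = [0, 0, 0, 0, 0, 0]
hill = [0, 0, 5, 0, 0, 0]  # barely visible from above, but blocking
print(has_line_of_sight(flat, 0, 5))
print(has_line_of_sight(hill, 0, 5))
```

The `hill` case captures Pitkethly’s example: from a sky-eye view the bump at index 2 is nearly invisible, but from soldier height it fails the check, so the unit would have to move before it could fire.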

And that marching bit presents its own difficulties in the form of AI pathfinding, specifically when a unit has to pick its way through broken terrain or a complicated urban environment.

“Our Battle AI relies on hint-lines (invisible to the player) around hills, for example, to tell it that the hill is a desirable feature to hold, and where best to position its forces,” Pitkethly adds. “Likewise, a poorly-connected network of city-streets might be impossible for the AI to negotiate. The player may see an open junction between two roads, but the AI sees an impassable area, and when ordered to move, may take what seems to the player to be an inefficient route.”

The solution to all of these problems comes largely down to design. The advantage of hand-crafting maps rather than relying on procedural generation is that, while more production work is involved, every piece of terrain can be sculpted and placed and assessed according to how it fits with every other piece of terrain, structure, or pathway. Linking streets in an orderly, sensible way is key, as is placing impassable terrain so that it won’t halt an AI unit marching around it, or avoiding terrain features that present significant LoS challenges but aren’t visible from a bird’s-eye perspective.

“Battle-maps and the terrain features on them require diligence in execution,” Bickham says. “We code the AI to understand how to negotiate terrain and buildings and are very mindful with map design from both a gameplay and a technical perspective.”

Throughout the process of integrating old with new, it’s clear that Creative Assembly can chalk up a good bit of its success to careful planning — as well as thoughtful analysis of what’s most key in giving players the intended experience (in this case, simulating believable, massive skirmishes) and making design choices that support those priorities. 


Video: Double Fine dev on how to be an effective team lead

Being the team lead of a project can be daunting, especially if there aren’t any clear directions on what that actually means. After being made team lead, what happens next? 

In this 2016 GDC talk, Double Fine’s Oliver Franzke shares the lessons he learned while settling into a leadership position at Double Fine and provides practical advice that will help new team leads get started in their new role.

Franzke also discusses the struggle he faced while adjusting to his new leadership role. After talking to fellow developers, it quickly became clear to him that most team leads were expected to pick up the necessary skillset of the job on their own.

Devs who are new to leadership positions may appreciate that they can now watch the talk completely free via the official GDC YouTube channel.

In addition to this presentation, the GDC Vault and its accompanying YouTube channel offers numerous other free videos, audio recordings, and slides from many of the recent Game Developers Conference events, and the service offers even more members-only content for GDC Vault subscribers.

Those who purchased All Access passes to recent events like GDC or VRDC already have full access to GDC Vault, and interested parties can apply for the individual subscription via a GDC Vault subscription page. Group subscriptions are also available: game-related schools and development studios who sign up for GDC Vault Studio Subscriptions can receive access for their entire office or company by contacting staff via the GDC Vault group subscription page. Finally, current subscribers with access issues can contact GDC Vault technical support.

Gamasutra and GDC are sibling organizations under parent UBM Americas.


Fire Emblem Heroes one-year anniversary celebration

The Fire Emblem Heroes game is heading toward its one-year anniversary! To thank you for your support, we are holding an in-game celebration with five different events.

1. Summoning Focus: One-Year-Anniversary Hero Fest!: 2/1/18 at 11pm to 2/8/18 at 10:59pm
To celebrate the one-year anniversary of the Fire Emblem Heroes game, four dependable Heroes are part of a 5-star summoning focus! Also, the initial summoning rate for 5-star Focus Heroes will be set to 5%. For new summoning events, the first time you summon, you won’t have to use Orbs. Check it out from the Summon menu.

2. One-Year-Anniversary Present: 2/1/18 at 11pm to 3/7/18 at 10:59pm
During this time, anyone who logs in can receive a one-time only present of 50 Orbs.

3. One-Year-Anniversary Celebration Log-In Bonus: 2/1/18 at 11pm to 2/16/18 at 10:59pm
During this time, you can receive a Log-In Bonus of 2 Orbs up to 10 times. That’s up to 20 Orbs in total.

4. Daily Special Maps: 2/1/18 at 11pm to 2/25/18 at 11pm
Daily Special Maps will be sent out every day for 25 days. There are two difficulties for each: Normal and Hard. You can earn up to 50 Orbs by playing.

5. Double EXP and SP Event: 2/1/18 at 11pm to 2/8/18 at 10:59pm
During this time, the EXP and SP you earn in battle will be doubled.

A Hero Rises: 2/1/18 at 7pm to 2/19/18 at 6:59pm
Now that the Choose Your Legends event is over, it’s time to decide who the number-one Hero is in all of Fire Emblem Heroes! Get ready for a new event: A Hero Rises. The top Hero will be given to players as an in-game present at a later date as a 5-star unit. Check out the Choose Your Legends Round 2 website, and help your favorite Hero rise to the top.

Illusory Dungeon: 2/8/18 at 11pm to 2/22/18 at 10:59pm
A new in-game event, Tap Battle: Illusory Dungeon, will be here!
Illusory Dungeon is a simple battle game in which you time your taps on the screen to defeat enemies. You can even use Heroes who have not yet been leveled up, so feel free to choose your four favorite Heroes.

What is waiting 100 floors down? Head toward the depths of the mists on the deepest floor of the dungeon.

During this time there will also be daily quests where you can earn different rewards each day. Starting on 2/11/18 at 11pm, there will be two types of Tap Battle quests.

Thanks again to everyone playing Fire Emblem Heroes, and Happy Anniversary! For more information about Fire Emblem Heroes, visit the official site.

Game Rated:

Fantasy Violence
Suggestive Themes
Partial Nudity
Digital Purchases


Alt.Ctrl.GDC Showcase: Too Many Captains

The 2018 Game Developers Conference will feature an exhibition called Alt.Ctrl.GDC dedicated to games that use alternative control schemes and interactions. Gamasutra will be talking to the developers of each of the games that have been selected for the showcase.

Too Many Captains (And Not Enough Wire) involves a lot of wire-swapping (and communication between friends) to safely pilot a starship through dangerous territory. One player has access to a control panel and only enough wire to power a few ship functions (shields, weapons, etc.), but they only know what to hook up based on what the other players tell them, since they won’t be looking at the screen. This turns ship piloting into a frantic game of teamwork and communication as players work their way through space.

Gamasutra spoke with Avi Romanoff and Giada Sun, developers of Too Many Captains (And Not Enough Wire), about the thinking that went into the game’s wire-plugging controller, and how it inspires players to get creative with their communication and work as a team.

What’s your name, and what was your role on this project?

We’re Avi Romanoff and Giada Sun. We’re both current students at Carnegie Mellon University, and we’re answering the questions together. We are the whole team so far, and enjoy having fluid roles. We’ve both done programming, interface and art design, game design, and helped build and prototype the physical controls.

How do you describe your innovative controller to someone who’s completely unfamiliar with it?

Think a 1900s telephone switchboard meets the controls on a ship from Star Trek. Colorful, lots of lights, but the main interaction is plugging and ka-thunking thick wires into holes.

What’s your background in making games?

Romanoff: I think I’ve always been into games. As a kid, I remember making my own trading card games out of index cards. In middle school, I set up a website that hosted flash games so my friends and I could play them in the computer lab, bypassing our school’s content filter. In college, I found myself building weird physical/digital installation games for my school’s Spring Carnival. Too Many Captains is definitely the most time I’ve spent working on a game… and I’m really liking it!

Sun: I was totally addicted to Warcraft III’s World Editor when I was in high school. I spent most of my time after school on making several DotA-like games. I really enjoyed making games because it’s a comprehensive art — you need to think about the story, how the characters look, how the players will act, and how to design enjoyable mechanics that make people want to keep playing. It was also an opportunity to learn graphic design and some coding. Too Many Captains really evokes tons of my high school memories for me.

What development tools did you use to build Too Many Captains (And Not Enough Wire)?

Architecturally, our game is very modular, and everything — including the hardware — is written in TypeScript. The “engine” and on-screen display is actually a website, using HTML5 canvas and the Phaser framework. The engineer’s controller is a Raspberry Pi, which controls a WS2812 (“NeoPixel”) light strip, buttons, and the wires. Since hardware is hard™, we also wrote a software “emulator” of our custom controller in React.

All of the different components of our game communicate with each other via WebSockets (with a super simple Node.js server serving as a message broker). We write code in vim and VS Code, and design assets in Illustrator and Sketch.

What physical materials did you use to make it?

We’re still experimenting with materials, but our current prototype controller is basically a lot of black foam core, wood, and tape. That worked surprisingly well, but it’s kind of falling apart, so we’re looking into laser-cutting and 3D printing components, such as MDF and acrylic.

How much time have you spent working on the game?

We started working on the project in mid-November 2017. Actually, we submitted our application to Alt.Ctrl.GDC about 3 weeks from the inception of the project. About 10 days before the submission deadline, someone suggested our project might be a good fit, but that we probably couldn’t make it given the short notice. We took that as a challenge… we basically lived a small hackerspace for a week.

How did you come up with the concept?

We were taking a class called Experimental Game Design taught by Paolo Pedercini (aka Molleindustria), and our assignment was to come up with non-traditional control schemes in Unity. Neither of us really knew Unity, and we’re both a little weird, so instead we brainstormed some wacky ideas like slapping computers, shouting at inanimate objects, swapping control schemes mid-game, and eventually plugging wires in.

Originally, we wanted the game to have an instruction manual — with dozens of different wire combinations. We played around with a lot of different ideas, and scrapped everything and started over a few times.

What drew you to tie controls to someone who cannot see the screen? Who isn’t even with the other players?

We knew we wanted to do something with asymmetric controls, and drew a lot of inspiration from games like Keep Talking and Nobody Explodes, Lovers in a Dangerous Spacetime, and the Star Trek movies (where the engineers are in the lower decks of the ship, communicating with the bridge via intercom). Having one player not being able to see the screen was a really fun and challenging constraint to embrace. Having that player not be in the same room ended up not being so important to the gameplay, and we’ve relaxed that requirement with our latest prototypes.

What difficulties came up in designing the game where problems can quickly be seen and communicated to the player outside the room?

The biggest challenge was designing gameplay that was both easy to jump into, but that was still challenging and skill-based once you mastered the basics. We also ran into another problem — ensuring both the captains and the engineer had sufficient agency. We experimented with microphones, manuals, and LED indicator lights. What we found was that the challenge of communicating was the fun part. So instead of trying to impose restrictions or structure on how the players communicated, we tried to make it as flexible as possible. “Say whatever you need to say!” This leads to lots of different strategies, like appointing a single head-communicator, having each captain focus on different priorities, developing shorthand, etc. It turns out that our game is a lot more about communicating and teamwork than actually piloting a starship… and we love that.

What thoughts went into the wire-plugging controller? How did you create frantic play with it without making it overwhelming? How did you design the game around this unique interface?

We spend a lot of time thinking about pacing. Although our game is essentially a side-scrolling shoot-em-up, the pacing is way slower than most games in that genre. The elements on the screen, including the players’ ship, move very slowly. And yet, yeah, the game is often super frantic. Part of this is because controlling the ship effectively is really hard. Even though there’s only 3 buttons and 3 wires, the objectives of the captains tend to change constantly. Engineers quickly learn how to move up with one hand and swap wires with another. Captains discover that they can spend more time firing weapons or repairing if they raise and lower their shields quickly, and fire a precise number of bullets to destroy a particular enemy.

It’s precisely because the controls are so cumbersome that players find ways to speed things up. The game is actually pretty slow — it’s the players that bring the frenzy.

How do you think standard interfaces and controllers will change over the next five or ten years?

We definitely think alternative physical controllers are the way of the future. Virtual and augmented reality technology is making it possible to create realistic and immersive virtual worlds, with complex real-world objects. At the same time, it’s becoming increasingly frustrating to have meaningful, realistic interactions in these worlds.

When all you have is pixels on a 2D screen, hitting a button to drive a car or using a joystick to aim a bow and arrow doesn’t seem so far-fetched. But when you’re in an immersive virtual world, it becomes totally unsatisfying. People want to touch, bend, twist, throw, plug, shake, etc. As our game worlds become more visually complex, the interaction models need to catch up. We see alternative and indie controllers playing a big part of that.

We’re also very excited that it’s becoming easier than ever to build alternative controllers. Inexpensive hobbyist platforms like Raspberry Pi and Arduino and maker-centric fabrication techniques like 3D printing and laser cutting are leveling the playing field for controller design. You don’t need to spend thousands of dollars on injection molding and circuit board design. You just need a laptop, some parts from Amazon, and maybe access to a makerspace. Or maybe you just need lots of cardboard!