
Free Weekend – Tom Clancy’s Ghost Recon® Wildlands

Play Tom Clancy’s Ghost Recon® Wildlands for FREE starting now through Sunday at 1PM Pacific Time. You can also pick up Tom Clancy’s Ghost Recon® Wildlands at 50% off the regular price!*

If you already have Steam installed, click here to install or play Tom Clancy’s Ghost Recon® Wildlands. If you don’t have Steam, you can download it here.

*Offer ends Monday at 10AM Pacific Time


Blog: Scaling dedicated game servers with Kubernetes – Part 3

The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.


Originally posted on compoundtheory.com.

This is part three of a five-part series on scaling game servers with Kubernetes.

In the previous two posts we looked at hosting dedicated game servers on Kubernetes and measuring and limiting their memory and CPU resources. In this instalment we look at how we can use the CPU information from the previous post to determine when we need to scale up our Kubernetes cluster because we’ve run out of room for more game servers as our player base increases.

Separating Apps and Game Servers

The first step we should take, before writing code to increase the size of the Kubernetes cluster, is to separate our applications — such as the matchmaker, the game server controller, and the soon-to-be-written node scaler — onto different nodes in the cluster from those the game servers run on. This has several benefits:

  1. The resource usage of our applications now has no effect on the game servers, since they run on different machines. This means that if the matchmaker has a CPU spike for some reason, there is an extra barrier ensuring it cannot unduly affect a dedicated game server in play.
  2. It makes scaling up and down capacity for dedicated game servers easier – as we only need to look at game server usage across a specific set of nodes, rather than all potential containers across the entire cluster.
  3. We can use bigger machines with more CPU cores and memory for the game server nodes, and smaller machines with fewer cores and less memory for the controller applications, since they need fewer resources. We are essentially able to pick the right size of machine for the job at hand, which gives us great flexibility while remaining cost effective.

Kubernetes makes setting up a heterogeneous cluster relatively straightforward and gives us the tools to specify where Pods are scheduled within the cluster – via the power of Node Selectors on our Pods. It’s worth noting that there is also a more sophisticated Node Affinity feature in beta, but we don’t need it for this example, so we’ll ignore its extra complexity for now.

To get started, we need to assign labels (a set of key-value pairs) to the nodes in our cluster. This is exactly the same as you would have seen if you’ve ever created Pods with Deployments and exposed them with Services, but applied to nodes instead. I’m using Google Cloud Platform’s Container Engine, which uses Node Pools to apply labels to nodes in the cluster as they are created and to set up heterogeneous clusters – but you can do similar things on other cloud providers, as well as directly through the Kubernetes API or the command line client. In this example, I added the labels role:apps and role:game-server to the appropriate nodes in my cluster. We can then add a nodeSelector option to our Kubernetes configurations to control which nodes in the cluster Pods are scheduled onto.
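
If you aren’t using Node Pools, the same labels can be applied by hand with the command line client. A minimal example (the node name here is illustrative, not from my cluster):

kubectl label nodes gke-soccer-default-pool-1234 role=game-server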

For example, here is the configuration for the matchmaker application, where you can see the nodeSelector set to role:apps to ensure it has container instances created only on the application nodes (those tagged with the “apps” role).

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: matchmaker
spec:
  replicas: 5
  template:
    metadata:
      labels:
        role: matchmaker-server
    spec:
      nodeSelector:
        role: apps # here is the node selector
      containers:
      - name: matchmaker
        image: gcr.io/soccer/matchmaker
        ports:
        - containerPort: 8080

By the same token, we can adjust the configuration from the previous article to make all the dedicated game server Pods schedule just on the machines we specifically designated for them, i.e. those tagged with role: game-server:

apiVersion: v1
kind: Pod
metadata:
  generateName: "game-"
spec:
  hostNetwork: true
  restartPolicy: Never
  nodeSelector:
    role: game-server # here is the node selector
  containers:
  - name: soccer-server
    image: gcr.io/soccer/soccer-server:0.1
    env:
    - name: SESSION_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    resources:
      limits:
        cpu: "0.1"

Note that in my sample code, I use the Kubernetes API to provide a configuration identical to the one above, but the yaml version is easier to understand, and it is the format we’ve been using throughout this series.
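
For reference, here is a hedged sketch of what creating that same Pod through the Go client can look like. It is not my actual sample code; the Create signature matches older client-go releases (newer releases take a context.Context as the first argument):

package scaler

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createGameServer creates a dedicated game server Pod equivalent to the
// yaml configuration above.
func createGameServer(cs kubernetes.Interface) (*v1.Pod, error) {
	pod := &v1.Pod{
		// GenerateName lets the API server append a unique suffix to "game-"
		ObjectMeta: metav1.ObjectMeta{GenerateName: "game-"},
		Spec: v1.PodSpec{
			HostNetwork:   true,
			RestartPolicy: v1.RestartPolicyNever,
			NodeSelector:  map[string]string{"role": "game-server"},
			Containers: []v1.Container{{
				Name:  "soccer-server",
				Image: "gcr.io/soccer/soccer-server:0.1",
				Env: []v1.EnvVar{{
					Name: "SESSION_NAME",
					ValueFrom: &v1.EnvVarSource{
						FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"},
					},
				}},
				Resources: v1.ResourceRequirements{
					Limits: v1.ResourceList{
						v1.ResourceCPU: resource.MustParse("0.1"),
					},
				},
			}},
		},
	}
	return cs.CoreV1().Pods("default").Create(pod)
}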

A Strategy for Scaling Up

Kubernetes on cloud providers tends to come with automated scaling capabilities, such as the Google Cloud Platform Cluster Autoscaler, but since these are generally built for stateless applications, and our dedicated game servers hold the game simulation in memory, they won’t work in this case. However, with the tools that Kubernetes gives us, it’s not particularly difficult to build our own custom Kubernetes cluster autoscaler!

Scaling the nodes in a Kubernetes cluster up and down probably makes more sense for a cloud environment, since we only want to pay for the resources we need. If we were running on our own premises, it may make less sense to change the size of our Kubernetes cluster; we could simply run one or more large clusters across all the machines we own and leave them at a static size, since adding and removing physical machines is far more onerous than in the cloud and wouldn’t necessarily save us money, given that we own or lease the machines for much longer periods.

There are multiple potential strategies for determining when to scale up the number of nodes in your cluster, but for this example we’ll keep things relatively simple:

  • Define a minimum and maximum number of nodes for game servers, and make sure we are within that limit.
  • Use CPU resource capacity and usage as our metric to track how many dedicated game servers we can fit on a node in our cluster (in this example we’re going to assume we always have enough memory).
  • Define a buffer of CPU capacity for a set number of game servers that must be available in the cluster at all times; i.e. add more nodes if, at any point, n more servers could not be started without running out of CPU resources.
  • Whenever a new dedicated game server is started, calculate whether we need to add a new node to the cluster because the CPU capacity across the nodes has dropped below the buffer amount.
  • As a fail-safe, every n seconds, also calculate whether we need to add a new node to the cluster because the available CPU capacity is under the buffer. (A minimal sketch of this buffer check follows the list.)
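
To make the buffer check concrete, here is a minimal sketch, assuming all values are tracked in CPU millicores; the function and parameter names are illustrative, not from my source code:

// needsScaleUp reports whether the cluster needs more nodes to keep the
// configured buffer of game server capacity available.
// freeCPU: unused CPU summed across the game server nodes (millicores)
// cpuPerServer: CPU each dedicated game server requires (millicores, > 0)
// bufferCount: how many game servers we always want room for
func needsScaleUp(freeCPU, cpuPerServer, bufferCount int64) bool {
	headroom := freeCPU / cpuPerServer // servers that could still fit
	return headroom < bufferCount
}

For example, with 0.1 CPU (100m) per server and a buffer of 30, needsScaleUp(2500, 100, 30) returns true, because only 25 more servers would fit.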

Creating a Node Scaler

The node scaler essentially runs an event loop to carry out the strategy outlined above. Using Go in combination with the native Kubernetes Go client library makes this relatively straightforward to implement, as you can see below in the Start() function of my node scaler. Note that I’ve removed most of the error handling and other boilerplate to make the event loop clearer, but the original code is here if you are interested.

// Start the HTTP server on the given port
func (s *Server) Start() error {
	// Access Kubernetes and return a client
	s.cs, _ = kube.ClientSet()

	// ... there be more code here ...

	// Use the K8s client's watcher channels to see game server events
	gw, _ := s.newGameWatcher()
	gw.start()

	// async loop around either the tick, or the event stream
	// and then scaleNodes() if either occur.
	go func() {
		log.Print("[Info][Start] Starting node scaling...")
		tick := time.Tick(s.tick)

		// ^^^ MAIN EVENT LOOP HERE ^^^
		for {
			select {
			case <-gw.events:
				log.Print("[Info][Scaling] Received Event, Scaling...")
				s.scaleNodes()
			case <-tick:
				log.Printf("[Info][Scaling] Tick of %#v, Scaling...", tick)
				s.scaleNodes()
			}
		}
	}()

	// Start the HTTP server
	return errors.Wrap(s.srv.ListenAndServe(), "Error starting server")
}

For those of you who aren’t as familiar with Go, let’s break this down a little bit:

  1. kube.ClientSet() – we have a small piece of utility code that returns a Kubernetes ClientSet, which gives us access to the Kubernetes API of the cluster we are running on.
  2. gw, _ := s.newGameWatcher() – Kubernetes has APIs that allow you to watch for changes across the cluster. In this particular case, the code returns a data structure containing a Go channel (essentially a blocking queue), specifically gw.events, that will return a value whenever a Pod for a game is added or deleted in the cluster. Look here for the full source for the gameWatcher; a hedged sketch of such a watcher also follows this list.
  3. tick := time.Tick(s.tick) – this creates another Go channel that delivers a value every tick interval, in this case every 10 seconds. If you would like to look at it, here is the reference for time.Tick.
  4. The main event loop is under the “// ^^^ MAIN EVENT LOOP HERE ^^^” comment. Within this code block is a select statement. This essentially declares that the system will block until either the gw.events channel or the tick channel (firing every 10s) returns a value, and then execute s.scaleNodes(). This means that a scaleNodes command will fire whenever a game server is added/removed or every 10 seconds.
  5. s.scaleNodes() – run the scale node strategy as outlined above.
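
To illustrate the watcher concept, here is a minimal sketch of watching game server Pods with client-go and funnelling add/delete events into a channel. It is not my actual gameWatcher; the namespace, label selector, and channel shape are assumptions for illustration, and the Watch signature matches older client-go releases (newer ones take a context.Context first):

package scaler

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// watchGamePods emits an empty struct whenever a game server Pod is
// added to or deleted from the cluster.
func watchGamePods(cs kubernetes.Interface) (<-chan struct{}, error) {
	events := make(chan struct{})
	w, err := cs.CoreV1().Pods("default").Watch(metav1.ListOptions{
		LabelSelector: "role=game-server",
	})
	if err != nil {
		return nil, err
	}
	go func() {
		for e := range w.ResultChan() {
			// Only additions and deletions change CPU consumption.
			if e.Type == watch.Added || e.Type == watch.Deleted {
				events <- struct{}{}
			}
		}
	}()
	return events, nil
}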

Within s.scaleNodes() we query the CPU limits that we set on each Pod, as well as the total CPU available on each Kubernetes node in the cluster, through the Kubernetes API. We can see the configured CPU limits in the Pod specification via the REST API and Go client, which gives us the ability to track how much CPU each of our game servers is taking up, as well as any Kubernetes management Pods that may also exist on the node. Through the Node specification, the Go client can also track the amount of CPU capacity available on each node. From here it is a case of summing up the amount of CPU used by Pods, subtracting it from the capacity of each node, and then determining whether one or more nodes need to be added to the cluster, such that we can maintain the buffer space for new game servers to be created in. (A hedged sketch of this arithmetic follows below.)

If you dig into the code in this example, you’ll see that we are using the APIs on Google Cloud Platform to add new nodes to the cluster. The APIs provided for Google Compute Engine Managed Instance Groups allow us to add (and remove) instances from the node pool in the Kubernetes cluster. That being said, any cloud provider will have similar APIs to let you do the same thing, and here you can see the interface we’ve defined to abstract this implementation detail in such a way that it could easily be modified to work with another provider.
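
Here is a minimal sketch of that CPU arithmetic, assuming the same client-go vintage as above; the node label, function name, and the choice to sum container limits are assumptions for illustration, not my exact code:

package scaler

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// freeCPUMillis returns the allocatable CPU across the game server nodes,
// minus the CPU limits of the Pods scheduled on them, in millicores.
func freeCPUMillis(cs kubernetes.Interface) (int64, error) {
	// Total allocatable CPU across the game server nodes.
	nodes, err := cs.CoreV1().Nodes().List(metav1.ListOptions{
		LabelSelector: "role=game-server",
	})
	if err != nil {
		return 0, err
	}
	gameNodes := map[string]bool{}
	var capacity int64
	for _, n := range nodes.Items {
		gameNodes[n.Name] = true
		cpu := n.Status.Allocatable[v1.ResourceCPU]
		capacity += cpu.MilliValue()
	}

	// Subtract the CPU limits of every Pod scheduled on those nodes.
	pods, err := cs.CoreV1().Pods("").List(metav1.ListOptions{})
	if err != nil {
		return 0, err
	}
	var used int64
	for _, p := range pods.Items {
		if !gameNodes[p.Spec.NodeName] {
			continue // skip Pods on the apps nodes
		}
		for _, c := range p.Spec.Containers {
			used += c.Resources.Limits.Cpu().MilliValue()
		}
	}
	return capacity - used, nil
}

If freeCPUMillis drops below cpuPerServer × bufferCount (the check sketched earlier), the scaler asks the cloud provider for another node.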

Deploying the Node Scaler

Below is the deployment YAML for the node scaler. As you can see, environment variables are used to set all the configuration options, including:

  • Which nodes in the cluster should be managed
  • How much CPU each dedicated game server needs
  • The minimum and maximum number of nodes
  • How much buffer should exist at all times

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodescaler
spec:
  replicas: 1 # only want one, to avoid race conditions
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        role: nodescaler-server
    spec:
      nodeSelector:
        role: apps
      containers:
      - name: nodescaler
        image: gcr.io/soccer/nodescaler
        env:
        - name: NODE_SELECTOR # the nodes to be managed
          value: "role=game-server"
        - name: CPU_REQUEST # how much CPU each server needs
          value: "0.1"
        - name: BUFFER_COUNT # how many servers do we need buffer for
          value: "30"
        - name: TICK # how often to tick over and recheck everything
          value: "10s"
        - name: MIN_NODE # minimum number of nodes for game servers
          value: "1"
        - name: MAX_NODE # maximum number of nodes for game servers
          value: "15"

You may have noticed that we set the deployment to have replicas: 1. We did this because we always want only one instance of the node scaler active in our Kubernetes cluster at any given point in time. This ensures we do not have more than one process attempting to scale our nodes up (and eventually down) within the cluster, which could lead to race conditions and cause all kinds of weirdness. Similarly, to ensure the node scaler is properly shut down before a new instance is created when we update it, we configure strategy.type: Recreate, so that Kubernetes destroys the currently running node scaler Pod before recreating the newer version on updates, again avoiding any potential race conditions.

See it in Action

Once we have deployed our node scaler, let’s tail the logs and see it in action. In the video below, the logs show that with one node in the cluster assigned to game servers, we have capacity to start forty dedicated game servers, and we have configured a required buffer of 30 dedicated game servers. As we fill the available CPU capacity with running dedicated game servers via the matchmaker, pay attention to how the number of game servers that can be created in the remaining space drops, and eventually a new node is added to maintain the buffer!

[Embedded video: tailing the node scaler logs as capacity fills and a new node is added]

Next Steps

The fact that we can do this without having to build so much of the foundation ourselves is one of the things that gets me so excited about Kubernetes. While we touched on the Kubernetes client in the first post in this series, in this post we’ve really started to take advantage of it. This is what I feel the true power of Kubernetes really is – an integrated set of tools for running software over a large cluster, over which you have a huge amount of control. In this instance, we haven’t had to write code to spin up and spin down dedicated game servers in very specific ways – we could just leverage Pods. When we want to take control and react to events within the Kubernetes cluster itself, we have the Watch APIs that enable us to do just that! It’s quite amazing how much core utility Kubernetes gives you out of the box that many of us have been building ourselves for years and years.

That all being said, scaling up nodes and game servers in our cluster is the comparatively easy part; scaling down is a trickier proposition. We’ll need to make sure nodes don’t have game servers on them before shutting them down, while also ensuring that game servers don’t end up widely fragmented across the cluster. In the next post in this series we’ll look at how Kubernetes can help in these areas as well.

In the meantime, as with the previous posts, I welcome questions and comments here, or you can reach out to me via Twitter. You can see my presentation at GDC this year as well as check out the code in GitHub, which is still being actively worked on!

All posts in this series:

  1. Containerising and Deploying
  2. Managing CPU and Memory
  3. Scaling Up Nodes
  4. Scaling Down Nodes (upcoming)
  5. Running Globally (upcoming)

Mobile developer Supersolid nets $4M to expand London team

Mobile game developer Supersolid has netted $4 million in funding to ramp up production and expand its London team.

The studio has worked on a variety of titles, including the infinite runner Super Penguin and the city-building RPG Adventure Town.

As reported by VentureBeat, the company’s growing roster has racked up over 50 million downloads to date. 

Supersolid CEO and co-founder Ed Chin believes the studio’s knack for creating engaging social and casual free-to-play titles is the secret to its success. 

“I think a big part of our success comes from the long experience we have in the team for developing engaging social and casual free-to-play games,” explained Chin. 

“Much of our senior team have worked together for many years prior to the founding of Supersolid, and together we created some of the most successful early free-to-play games, which happened to be on Facebook.

“The mobile games market has seen high saturation in particular game genres, but there remains a lot of opportunity in mastering new genres. We believe this to be especially the case for several under-served genres with a wider casual audience.”


Riot co-founders returning to the front lines of game development

Riot co-founders Brandon Beck and Marc Merrill are leaving the world of management and heading back to the front lines of game development. 

Writing in a blog post, the pair said they’re clamoring for a return to the early days when they could spend their time thinking about how to turn League of Legends into a hit.

“When we founded Riot eleven years ago, we spent virtually every waking hour of the day thinking about how to make League of Legends as great of an experience as possible,” they wrote.

“As League started having success however, Riot Games grew from those humble beginnings where we could feed the whole team with a handful of pizzas to now having over 2,500 Rioters across 20 offices around the world.

“That growth had lots of benefits: our capabilities improved, our reach broadened, and we could deliver League of Legends and eSports to more players than ever before. But it also meant the majority of our time was allocated to ‘managing’ the company rather than creating incredible experiences for players.”

For the duo, it was an unfortunate side effect of League’s success, but now they’ll be winding back the clock and jumping back into the trenches for the company’s next chapter.  

In their absence, Riot president Nicolo Laurent, CFO Dylan Jadeja, and CTO Scott Gelb will work together to handle the day-to-day running of the studio.


New Pokémon Ultra Sun and Pokémon Ultra Moon details revealed!

The Pokémon Company International and Nintendo today revealed new information about the mysterious Pokémon Necrozma, new Z-Moves and more for the upcoming games Pokémon Ultra Sun and Pokémon Ultra Moon.

After taking over the Legendary Pokémon Solgaleo in Pokémon Ultra Sun or the Legendary Pokémon Lunala in Pokémon Ultra Moon, Necrozma will become Dusk Mane Necrozma or Dawn Wings Necrozma!

Dusk Mane Necrozma
Type: Psychic/Steel
This form of Necrozma manifests when the Pokémon takes control of both the body and mind of the Legendary Pokémon Solgaleo, absorbing the light energy that pours out of it. It slices opponents with its strong claws on its four legs and it can propel itself forward by shooting black light from both sides of its chest.

Dawn Wings Necrozma
Type: Psychic/Ghost
This form of Necrozma manifests when the Pokémon takes control of the Legendary Pokémon Lunala, stealing its light energy by force. Dawn Wings Necrozma accelerates by shooting black light from its back. This Necrozma form shoots energy that glows darkly from the black parts of its wings.

Photon Geyser is a Psychic-type special move that only Necrozma can learn. This attack engulfs the target in a pillar of light and compares the user’s Attack and Sp. Atk stats, dealing damage to the opponent according to whichever is higher. Necrozma can also learn this move when it is in Dusk Mane form or Dawn Wings form. Alongside this new move, Solgaleo and Lunala will receive the exclusive Z-Moves Searing Sunraze Smash and Menacing Moonraze Maelstrom, respectively.

Searing Sunraze Smash is a new Steel-type Z-Move that can be used if you have a Solgaleo that knows Sunsteel Strike holding the exclusive Z-Crystal Solganium Z. This attack damages a target while ignoring any effects of the target’s Ability. Menacing Moonraze Maelstrom is a new Ghost-type Z-Move that can be used if you have a Lunala that knows Moongeist Beam holding the exclusive Z-Crystal Lunalium Z. As with Searing Sunraze Smash, this attack also damages a target while ignoring any effects of the target’s Ability. A Necrozma that knows Sunsteel Strike or Moongeist Beam and is holding the corresponding Z-Crystal will also be able to unleash Searing Sunraze Smash or Menacing Moonraze Maelstrom!

The Rotom Dex returns in Pokémon Ultra Sun and Pokémon Ultra Moon and has been powered up! Throughout your adventure, you will grow closer to the Rotom Dex the more you communicate with it, which will cause it to be more helpful. By deepening the bond with the Rotom Dex throughout the game, you will be able to get special items via the Roto Loto feature. These special items come in different varieties, with some increasing the Exp. Points that you receive for a set period of time, while others may make it easier to catch Pokémon. If you become close enough with the Rotom Dex, it will use a special power for you called Rotom’s Z-Power. This lets you use a second Z-Move in battle, even though normally players can only use one Z-Move per battle.

Pokémon Ultra Sun and Pokémon Ultra Moon will launch on November 17, 2017, exclusively on the Nintendo 3DS family of systems. For more details about today’s announcement, please visit http://www.pokemon-sunmoon.com/ultra/en-us/

Game Rated:

Mild Cartoon Violence


Yono and the Celestial Elephants lands on eShop today!

Many adventure games have a distinct lack of elephants … but that’s all about to change. Elephant Yono is about to arrive in a phantastical realm of Humans, Robots and the Undead. He will run, open chests with his trunk, head-butt bad guys, spray water and throw explosives.

But Yono is still so very young and in a kingdom inhabited by feudal Humans, undead Bonewights and robotic Mekani, it’s not easy to keep one’s trunk out of trouble.

Yono and the Celestial Elephants is a grand adventure with carefully designed puzzles, treasure hunts, a sprinkling of combat, and a world full of people. Play as a young elephant tasked with saving a world he’s never seen before, and explore the rich history of a kingdom where humans, zombies and robots live side by side.

Yono and the Celestial Elephants is available exclusively on Nintendo Switch today.

Game Rated:

Mild Fantasy Violence


Shadow of War dev says to be a better lead, ‘make yourself obsolete’

Matthew Allen loves shaders.

“I love putting on the headphones and writing shaders,” he told Gamasutra recently, while chatting at an Xbox press event. “It scratches the technical itch and the artistic itch, because I get to make things look shit-hot by writing code.”

But there’s a problem: Allen loves shaders a bit too much. He most recently served as the technical art director on Monolith Productions’ Middle-earth: Shadow of War, and in the early stages of the game’s development, Allen created a bit of a bottleneck for the team by allowing himself to become “the shader guy.” 

He loved writing shaders, but couldn’t devote enough time to both them and his other duties to keep up with the pace of production. To solve the problem, he removed the bottleneck: himself.

“I realized, you know what, this is cool, I do it well and it’s awesome, but…I need to find a better way,” he said. “So I ended up finding a super awesome woman, just out of college, and she wanted to write shaders, so she basically became our shader queen. She’s written almost all of the shaders in the game.”

This is a practical, timely example of how Allen solved a production problem, something he believes game devs should constantly be looking to do if they want to be good leads — even when it means making yourself obsolete.

‘Make it so you come into work every day and you don’t have anything to do’

Hiring someone new onto the team helped dislodge Allen as a bottleneck, since shaders were someone else’s full-time responsibility now. That also meant he suddenly had a bit more free time in his work day, and to fill that time he went looking for something to solve.

“I moved on and started looking at what our problems would be,” he said. “One of the big ones we ended up tackling was all the facial animations for all of our orcs, and the 40,000+ lines of dialog, and how we were gonna animate all that with all the actors.”

So Allen and other folks on the Shadow of War team spent about a year “completely redoing” the animations for the game’s monstrous maws, revamping how many bones were in each face and how they move in response to the data they’re fed. Looking back now, Allen isn’t sure it would have happened if he’d kept happily writing shaders.

“The best thing you can do, for yourself and for us, is to make yourself obsolete. Make it so you come into work every day and you don’t have anything to do.”

“If I hadn’t said ‘alright, this thing that I’m doing, that I love doing…I should move on,’ if I hadn’t thought through that, then all of that updated face tech, that improved pipeline…well, it probably would have existed, but the system was already sort of long in the tooth, and by focusing on it, I think we got it to a lot better place than it would have been otherwise.”

Allen shared this specific example from Shadow of War‘s development to illustrate a piece of game dev career advice he wants other devs to think about: always look for new problems. Someone told him that once, when he was feeling stuck, and it’s stuck with him ever since.

“I once got really great advice from Samantha Ryan, who used to run WB [Games], and before that was Monolith’s studio head,” said Allen. “She was my manager, and she said the best thing you can do, for yourself and for us, is to make yourself obsolete. Make it so you come into work every day and you don’t have anything to do.”

There’s a lot to unpack there, this notion of going to work every day and looking for ways to make your job extinct. Allen acknowledged that it can feel counter-intuitive, but suggested devs who want to be good leads think of it less as eliminating their value and more as solving problems.

“It seems very counter-intuitive; I was like, are you just gonna fire me then?” Allen recalled, with a laugh.

“And she’s like ‘no, because there’s a certain personality type that will always look for the next problem. And if you’re too busy focusing on existing problems, you’re not doing the thing that’s best. The best thing for you to do, would be to look for the next problem. Because when you look for those problems, and solve those problems, we as a company are better, and you are more valuable. So really her whole point was, shift responsibility onto folks. Which seemed counter-intuitive and weird, but I don’t know, I loved it, and I’ve lived by that ever since.”

Of course, many devs will feel like they’re in a position where they can’t shift responsibilities onto others — if they’re working alone or in a small team, for example, or doing remote contract work.

Allen acknowledged this as well, noting that his experiences are most relevant to devs at large studios but can also serve as a general reminder to try and solve problems permanently whenever possible. If you can figure out a permanent solution to something you regularly spend time working on, you can move on to solve bigger and better problems.

“Always look for new problems. Look to solve new problems, look for people to help you solve them. Being stuck someplace is really about not pressing through, sometimes,” added Allen. “Our industry is unique in that, the whole point of the job is constantly solving problems…so every day you can sort of learn something new, solve a new problem.”