04-30-2018, 09:59 PM
Blog: Scaling dedicated game servers with Kubernetes – Part 4
<div style="margin: 5px 5% 10px 5%;"><img src="http://www.sickgaming.net/blog/wp-content/uploads/2018/04/blog-scaling-dedicated-game-servers-with-kubernetes-part-4.gif" width="1260" height="638" title="" alt="" /></div><div><p><strong><em><small>The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.<br />The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.</small></em></strong></p>
<hr/>
<p>Originally posted on <a href="https://www.compoundtheory.com/scaling-dedicated-game-servers-with-kubernetes-part-4-scaling-down/">compoundtheory.com</a>.</p>
<p><em>This is part four of a <del>five</del> <a href="http://www.compoundtheory.com/tag/scaling-dedicated-game-servers-with-kubernetes/">four-part series</a> on scaling game servers with Kubernetes.</em></p>
<p>In the previous three posts, we <a href="https://www.compoundtheory.com/scaling-dedicated-game-servers-with-kubernetes-part-1-containerising-and-deploying/">hosted our game servers on Kubernetes</a>, <a href="https://www.compoundtheory.com/scaling-dedicated-game-servers-with-kubernetes-part-2-managing-cpu-and-memory/">measured and limited their resource usage</a>, and <a href="https://www.compoundtheory.com/scaling-dedicated-game-servers-with-kubernetes-part-3-scaling-up-nodes/">scaled up the nodes</a> in our cluster based on that usage. Now we need to tackle the harder problem: scaling down the nodes in our cluster as resources are no longer being used, while ensuring that in-progress games are not interrupted when a node is deleted.</p>
<p>On the surface, scaling down nodes in our cluster may seem particularly complicated. Each game server has in-memory state of the current game and multiple game clients are connected to an individual game server playing a game. Deleting arbitrary nodes could potentially disconnect active players — and that tends to make them angry! Therefore, we can only remove nodes from a cluster when a node is empty of dedicated game servers.</p>
<p>This means that if you are running on <a href="http://cloud.google.com/gke">Google Kubernetes Engine</a> (GKE), or similar, you can’t use a managed autoscaling system. To <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler#operating_criteria">quote the documentation</a> for the GKE autoscaler “Cluster autoscaler assumes that all replicated Pods can be restarted on some other node…” — which in our case is definitely not going to work, since it could easily delete nodes that have active players on them.</p>
<p>That being said, when looking at this situation more closely, we discover that we can break this down into three separate strategies that when combined together make scaling down a manageable problem that we can implement ourselves:</p>
<ol>
<li>Group game servers together to avoid fragmentation across the cluster</li>
<li>Cordon nodes when CPU capacity is above the configured buffer</li>
<li>Delete a cordoned node from the cluster once all the games on the node have exited</li>
</ol>
<p>Let’s look at each of these in detail.</p>
<h3>Grouping Game Servers Together in the Cluster</h3>
<p>We want to avoid fragmentation of game servers across the cluster, so we don’t end up with a small, scattered set of game servers still running across multiple nodes, which would prevent those nodes from being shut down and their resources from being reclaimed.</p>
<p>This means we don’t want a scheduling pattern that creates game server Pods on random nodes across our cluster like this:</p>
<p><img alt="Fragmented across the cluster" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/04/blog-scaling-dedicated-game-servers-with-kubernetes-part-4.gif"/></p>
<p>But instead want to have our game server Pods scheduled packed as tight as possible like this:</p>
<p><img alt="Fragmented across the cluster" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/04/blog-scaling-dedicated-game-servers-with-kubernetes-part-4-1.gif"/></p>
<p>To group our game servers together, we can take advantage of the Kubernetes <code>PodAffinity</code> configuration with the <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature"><code>PreferredDuringSchedulingIgnoredDuringExecution</code></a> option. This gives us the ability to tell the scheduler that we prefer to group Pods by the hostname of the node they run on, which essentially means that Kubernetes will prefer to place a dedicated game server Pod on a node that already has a dedicated game server Pod on it.</p>
<p>In an ideal world, we would want a dedicated game server Pod to be scheduled on the node with the most dedicated game server Pods, as long as that node also has enough spare CPU resources. We could definitely do this if we wanted to write our own <a href="https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/">custom scheduler for Kubernetes</a>, but to keep this demo simple, we will stick with the <code>PodAffinity</code> solution. That being said, when we consider the short length of our games, and that we will be adding (and explaining) cordoning nodes shortly, this combination of techniques is good enough for our requirements, and removes the need for us to write additional complex code.</p>
<p>When we add the <code>PodAffinity</code> configuration to the previous post’s configuration, we end up with the following, which tells Kubernetes to prefer placing Pods with the label <code>sessions: game</code> on the same node as each other whenever possible.</p>
<pre title="pod.yaml">
apiVersion: v1
kind: Pod
metadata:
  generateName: "game-"
  labels:
    sessions: game # this label is what the podAffinity selector below matches on
spec:
  hostNetwork: true
  restartPolicy: Never
  nodeSelector:
    role: game-server
  containers:
    - name: soccer-server
      image: gcr.io/soccer/soccer-server:0.1
      env:
        - name: SESSION_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      resources:
        limits:
          cpu: "0.1"
  affinity:
    podAffinity: # group game server Pods
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100 # a weight is required for preferred scheduling terms
          podAffinityTerm:
            labelSelector:
              matchLabels:
                sessions: game
            topologyKey: kubernetes.io/hostname
</pre>
<h3>Cordoning Nodes</h3>
<p>Now that we have our game servers relatively well packed together in the cluster, we can discuss “cordoning nodes”. What does cordoning nodes really mean? Very simply, Kubernetes gives us the ability to tell the scheduler: “Hey scheduler, don’t schedule anything new on this node here”. This ensures that no new Pods get scheduled on that node. In fact, in some places in the Kubernetes documentation, this is simply referred to as <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#manual-node-administration">marking a node unschedulable</a>.</p>
<p><img alt="Cordoning nodes" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/04/blog-scaling-dedicated-game-servers-with-kubernetes-part-4-2.gif"/></p>
<p>In the code below, if you focus on the section <code>s.bufferCount < available</code> you will see that we make a request to cordon nodes if the amount of CPU buffer we currently have is greater than what we have set as our need. We’ve stripped some parts out for brevity, but you can see the <a href="https://github.com/markmandel/paddle-soccer/blob/master/server/nodescaler/scaler.go#L40">original here</a>.</p>
<pre title="scaler.go">
// scale scales nodes up and down, depending on CPU constraints
// this includes adding nodes, cordoning them as well as deleting them
func (s Server) scaleNodes() error {
	nl, err := s.newNodeList()
	if err != nil {
		return err
	}
	available := nl.cpuRequestsAvailable()
	if available < s.bufferCount {
		finished, err := s.uncordonNodes(nl, s.bufferCount-available)
		// short circuit if uncordoning means we have enough buffer now
		if err != nil || finished {
			return err
		}
		nl, err := s.newNodeList()
		if err != nil {
			return err
		}
		// recalculate
		available = nl.cpuRequestsAvailable()
		err = s.increaseNodes(nl, s.bufferCount-available)
		if err != nil {
			return err
		}
	} else if s.bufferCount < available {
		err := s.cordonNodes(nl, available-s.bufferCount)
		if err != nil {
			return err
		}
	}

	return s.deleteCordonedNodes()
}
</pre>
<p>As you can also see from the code above, we <em>uncordon</em> any available cordoned nodes in the cluster if we drop below the configured CPU buffer. This is faster than adding a whole new node, so it’s important to check for cordoned nodes before creating a new node from scratch. For the same reason, we also have a configured delay before a cordoned node is deleted (you can see the <a href="https://github.com/markmandel/paddle-soccer/blob/master/server/nodescaler/scaler.go#L222">source here</a>), to limit unnecessary thrashing from creating and deleting nodes in the cluster.</p>
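<p>How does the scaler know how long a node has been cordoned? One straightforward approach is to read the cordon time back off the node itself. The sketch below assumes the hypothetical <code>cordonTimestampAnnotation</code> key from the cordon sketch above; the project’s actual <code>cordonTimestamp</code> helper may store and parse this differently.</p>
<pre title="scaler.go">
// cordonTimestamp is a sketch of reading back the time a node was cordoned,
// using the hypothetical annotation written in the cordon sketch above, so
// deleteCordonedNodes can enforce the configured shutdown delay.
func cordonTimestamp(n v1.Node) (time.Time, error) {
	ts, ok := n.ObjectMeta.Annotations[cordonTimestampAnnotation]
	if !ok {
		return time.Time{}, errors.Errorf("node %v has no cordon timestamp annotation", n.Name)
	}
	i, err := strconv.ParseInt(ts, 10, 64)
	if err != nil {
		return time.Time{}, errors.Wrapf(err, "could not parse cordon timestamp on node %v", n.Name)
	}
	return time.Unix(i, 0), nil
}
</pre>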
<p>This is a pretty great start. However, when we cordon nodes, we only want to cordon the nodes that have the fewest game server Pods on them, since they are the most likely to empty first as their game sessions come to an end.</p>
<p>Thanks to the Kubernetes API, it’s relatively straightforward to count the number of game server Pods on each Node, and sort them in ascending order. From there we can do arithmetic to determine if we still remain above the desired CPU buffer if we cordon each of the available nodes. If so, we can safely <a href="https://github.com/markmandel/paddle-soccer/blob/master/server/nodescaler/scaler.go#L159">cordon those nodes</a>.</p>
<pre title="scaler.go">
// cordonNodes decreases the number of available nodes by the given number of cpu blocks (but not over),
// cordoning those nodes that have the least number of games currently on them
func (s Server) cordonNodes(nl *nodeList, gameNumber int64) error {
	// … removed some input validation ...

	// how many nodes (n) do we have to delete such that we are cordoning no more
	// than the gameNumber
	capacity := nl.nodes.Items[0].Status.Capacity[v1.ResourceCPU] // assuming all nodes are the same
	cpuRequest := gameNumber * s.cpuRequest
	diff := int64(math.Floor(float64(cpuRequest) / float64(capacity.MilliValue())))

	if diff <= 0 {
		log.Print("[Info][CordonNodes] No nodes to be cordoned.")
		return nil
	}

	log.Printf("[Info][CordonNodes] Cordoning %v nodes", diff)

	// sort the nodes, such that the ones with the least number of games are first
	nodes := nl.nodes.Items
	sort.Slice(nodes, func(i, j int) bool {
		return len(nl.nodePods(nodes[i]).Items) < len(nl.nodePods(nodes[j]).Items)
	})

	// grab the first n number of them
	cNodes := nodes[0:diff]

	// cordon them all
	for _, n := range cNodes {
		log.Printf("[Info][CordonNodes] Cordoning node: %v", n.Name)
		err := s.cordon(&n, true)
		if err != nil {
			return err
		}
	}

	return nil
}
</pre>
<h3>Removing Nodes from the Cluster</h3>
<p>Now that we have nodes in our clusters being cordoned, it is just a matter of waiting until the cordoned node is empty of game server Pods before deleting it. The code below also makes sure the node count never drops below a configured minimum as a nice baseline for capacity within our cluster.</p>
<p>You can see this in the code below, and in the <a href="https://github.com/markmandel/paddle-soccer/blob/master/server/nodescaler/scaler.go#L201">original context</a>:</p>
<pre title="scaler.go">
// deleteCordonedNodes will delete a cordoned node if
// the time since it was cordoned has expired
func (s Server) deleteCordonedNodes() error {
	nl, err := s.newNodeList()
	if err != nil {
		return err
	}

	l := int64(len(nl.nodes.Items))
	if l <= s.minNodeNumber {
		log.Print("[Info][deleteCordonedNodes] Already at minimum node count. exiting")
		return nil
	}

	var dn []v1.Node
	for _, n := range nl.cordonedNodes() {
		ct, err := cordonTimestamp(n)
		if err != nil {
			return err
		}

		pl := nl.nodePods(n)
		// if there are no game session pods, and the expiry has passed, then delete the node
		if len(filterGameSessionPods(pl.Items)) == 0 && ct.Add(s.shutdown).Before(s.clock.Now()) {
			err := s.cs.CoreV1().Nodes().Delete(n.Name, nil)
			if err != nil {
				return errors.Wrapf(err, "Error deleting cordoned node: %v", n.Name)
			}
			dn = append(dn, n)
			// don't delete more nodes than the minimum number set
			if l--; l <= s.minNodeNumber {
				break
			}
		}
	}

	return s.nodePool.DeleteNodes(dn)
}
</pre>
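<p>To tie it all together, <code>scaleNodes()</code> needs to run on a regular interval. The loop below is a minimal sketch of driving it from a ticker; the ten second interval and the log-and-continue error handling are assumptions for illustration, not necessarily how the Paddle Soccer nodescaler is actually wired up.</p>
<pre title="scaler.go">
// runScaler is a sketch of driving the scaling loop on a fixed interval,
// until the stop channel is closed.
func (s Server) runScaler(stop <-chan struct{}) {
	tick := time.NewTicker(10 * time.Second)
	defer tick.Stop()

	for {
		select {
		case <-stop:
			return
		case <-tick.C:
			if err := s.scaleNodes(); err != nil {
				log.Printf("[Error][runScaler] Error scaling nodes: %v", err)
			}
		}
	}
}
</pre>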
<h3>Conclusion</h3>
<p>We’ve successfully containerised our game servers, scaled them up as demand increases, and now scaled our Kubernetes cluster down, so we don’t have to pay for underutilised machinery — all powered by the APIs and capabilities that Kubernetes makes available out of the box. While it would take more work to turn this into a production level system, you can already see how to take advantage of the many building blocks available to you.</p>
<p>Before we finish, I would like to apologise for the delay in producing the fourth part in this series. If you saw <a href="https://cloudplatform.googleblog.com/2018/03/introducing-Agones-open-source-multiplayer-dedicated-game-server-hosting-built-on-Kubernetes.html">the announcement</a>, you may have guessed that a lot of my time got taken up developing and releasing <a href="https://agones.dev">Agones</a>, the open source, productised version of this series of posts on running game servers on Kubernetes.</p>
<p>On that note, this will also be the last installment in this series. I had already completed the work to implement scaling down before starting on Agones, and rather than build out new functionality for global cluster management on <a href="https://github.com/markmandel/paddle-soccer">Paddle Soccer</a>, I’m going to focus those efforts on building out awesome new features for Agones and bring it up from its current <a href="https://github.com/GoogleCloudPlatform/agones/releases/tag/v0.1">0.1 alpha release</a> to a full 1.0, production-ready milestone.</p>
<p>I’m very excited about the future of Agones, and if my series of blog posts has inspired you, <a href="https://github.com/GoogleCloudPlatform/agones">watch the GitHub repository</a>, <a href="https://join.slack.com/t/agones/shared_invite/enQtMzE5NTE0NzkyOTk1LWQ2ZmY1Mjc4ZDQ4NDJhOGYxYTY2NTY0NjUwNjliYzVhMWFjYjMxM2RlMjg3NGU0M2E0YTYzNDIxNDMyZGNjMjU">join the Slack</a>, <a href="https://twitter.com/agonesdev">follow us on Twitter</a> and <a href="https://groups.google.com/forum/#!forum/agones-discuss">get involved on the mailing list</a>. We’re actively seeking more contributors, and would love to have you involved.</p>
<p>Lastly, I welcome questions and comments here, or <a href="https://twitter.com/neurotic">reach out to me via Twitter</a>. You can also see my <a href="http://www.gdcvault.com/play/1024328/">presentation at GDC</a> and <a href="https://www.youtube.com/watch?v=a08WrvIPKMw">GCAP</a> from 2017 on this topic, as well as check out the code <a href="https://github.com/markmandel/paddle-soccer">in GitHub</a>.</p>
<p>All posts in this series:</p>
<ol>
<li><a href="http://gamasutra.com/blogs/MarkMandel/20170502/297222/Scaling_Dedicated_Game_Servers_with_Kubernetes_Part_1__Containerising_and_Deploying.php">Containerising and Deploying</a></li>
<li><a href="http://www.gamasutra.com/blogs/MarkMandel/20170713/301596/Scaling_Dedicated_Game_Servers_with_Kubernetes_Part_2__Managing_CPU_and_Memory.php">Managing CPU and Memory</a></li>
<li><a href="https://www.gamasutra.com/blogs/MarkMandel/20171011/307354/Scaling_Dedicated_Game_Servers_with_Kubernetes_Part_3__Scaling_Up_Nodes.php">Scaling Up Nodes</a></li>
<li><strong>Scaling Down Nodes</strong></li>
<li><s>Running Globally</s></li>
</ol>
</div>
<div style="margin: 5px 5% 10px 5%;"><img src="http://www.sickgaming.net/blog/wp-content/uploads/2018/04/blog-scaling-dedicated-game-servers-with-kubernetes-part-4.gif" width="1260" height="638" title="" alt="" /></div><div><p><strong><em><small>The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.<br />The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.</small></em></strong></p>
<hr/>
<p>Originally posted on <a href="https://www.compoundtheory.com/scaling-dedicated-game-servers-with-kubernetes-part-4-scaling-down/">compoundtheory.com</a>.</p>
<p><em>This is part four of a</em> <em><del>five</del></em><a href="http://www.compoundtheory.com/tag/scaling-dedicated-game-servers-with-kubernetes/"><em>four-part</em></a> <a href="http://www.compoundtheory.com/tag/scaling-dedicated-game-servers-with-kubernetes/"><em>series</em></a> <em>on scaling game servers with Kubernetes.</em></p>
<p>In the previous three posts, we <a href="https://www.compoundtheory.com/scaling-dedicated-game-servers-with-kubernetes-part-1-containerising-and-deploying/">hosted our game servers on Kubernetes</a>, <a href="https://www.compoundtheory.com/scaling-dedicated-game-servers-with-kubernetes-part-2-managing-cpu-and-memory/">measured and limited their resource usage</a>, and <a href="https://www.compoundtheory.com/scaling-dedicated-game-servers-with-kubernetes-part-3-scaling-up-nodes/">scaled up the nodes</a> in our cluster based on that usage. Now we need to tackle the harder problem: scaling down the nodes in our cluster as resources are no longer being used, while ensuring that in-progress games are not interrupted when a node is deleted.</p>
<p>On the surface, scaling down nodes in our cluster may seem particularly complicated. Each game server has in-memory state of the current game and multiple game clients are connected to an individual game server playing a game. Deleting arbitrary nodes could potentially disconnect active players — and that tends to make them angry! Therefore, we can only remove nodes from a cluster when a node is empty of dedicated game servers.</p>
<p>This means that if you are running on <a href="http://cloud.google.com/gke">Google Kubernetes Engine</a> (GKE), or similar, you can’t use a managed autoscaling system. To <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler#operating_criteria">quote the documentation</a> for the GKE autoscaler “Cluster autoscaler assumes that all replicated Pods can be restarted on some other node…” — which in our case is definitely not going to work, since it could easily delete nodes that have active players on them.</p>
<p>That being said, when looking at this situation more closely, we discover that we can break this down into three separate strategies that when combined together make scaling down a manageable problem that we can implement ourselves:</p>
<ol>
<li>Group game servers together to avoid fragmentation across the cluster</li>
<li>Cordon nodes when CPU capacity is above the configured buffer</li>
<li>Delete a cordoned node from the cluster once all the games on the node have exited</li>
</ol>
<p>Let’s look at each of these detail.</p>
<h3>Grouping Game Servers Together in the Cluster</h3>
<p>We want to avoid fragmentation of game servers across the cluster so we don’t end up with a wayward small set of game servers still running across multiple nodes, which will prevent those nodes from being shut down and reclaiming their resources.</p>
<p>This means we don’t want a scheduling pattern that creates game server Pods on random nodes across our cluster like this:</p>
<p><img alt="Fragmented across the cluster" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/04/blog-scaling-dedicated-game-servers-with-kubernetes-part-4.gif"/></p>
<p>But instead want to have our game server Pods scheduled packed as tight as possible like this:</p>
<p><img alt="Fragmented across the cluster" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/04/blog-scaling-dedicated-game-servers-with-kubernetes-part-4-1.gif"/></p>
<p>To group our game servers together, we can take advantage of Kubernetes Pod <code>PodAffinity</code> configuration with the <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature"><code>PreferredDuringSchedulingIgnoredDuringExecution</code></a> option. This gives us the ability to tell Pods that we prefer to group them by the hostname of the node that they are currently on, which essentially means that Kubernetes will prefer to put a dedicated game server Pod on a node that already has a dedicated game server Pod on it already.</p>
<p>In an ideal world, we would want a dedicated game server Pod to be scheduled on the node with the most dedicated game server Pods, as long as that node also has enough spare CPU resources. We could definitely do this if we wanted to write our own <a href="https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/">custom scheduler for Kubernetes</a>, but to keep this demo simple, we will stick with the <code>PodAffinity</code> solution. That being said, when we consider the short length of our games, and that we will be adding (and explaining) cordoning nodes shortly, this combination of techniques is good enough for our requirements, and removes the need for us to write additional complex code.</p>
<p>When we add the <code>PodAffinity</code> configuration to the previous post’s configuration, we end up with the following, which tells Kubernetes to put pods with the labels <code>sessions: game</code> on the same node as each other whenever possible.</p>
<pre title="pod.yaml">
apiVersion: v1
kind: Pod
metadata: generateName: "game-"
spec: hostNetwork: true restartPolicy: Never nodeSelector: role: game-server containers: - name: soccer-server image: gcr.io/soccer/soccer-server:0.1 env: - name: SESSION_NAME valueFrom: fieldRef: fieldPath: metadata.name resources: limits: cpu: "0.1" affinity: podAffinity: # group game server Pods preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchLabels: sessions: game topologyKey: kubernetes.io/hostname
</pre>
<h3>Cordoning Nodes</h3>
<p>Now that we have our game servers relatively well packed together in the cluster, we can discuss “cordoning nodes”. What does cordoning nodes really mean? Very simply, Kubernetes gives us the ability to tell the scheduler: “Hey scheduler, don’t schedule anything new on this node here”. This ensures that no new Pods get scheduled on that node. In fact, in some places in the Kubernetes documentation, this is simply referred to as <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#manual-node-administration">marking a node unschedulable</a>.</p>
<p><img alt="Cordoning nodes" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/04/blog-scaling-dedicated-game-servers-with-kubernetes-part-4-2.gif"/></p>
<p>In the code below, if you focus on the section <code>s.bufferCount < available</code> you will see that we make a request to cordon nodes if the amount of CPU buffer we currently have is greater than what we have set as our need. We’ve stripped some parts out for brevity, but you can see the <a href="https://github.com/markmandel/paddle-soccer/blob/master/server/nodescaler/scaler.go#L40">original here</a>.</p>
<pre title="scaler.go">
// scale scales nodes up and down, depending on CPU constraints
// this includes adding nodes, cordoning them as well as deleting them
func (s Server) scaleNodes() error { nl, err := s.newNodeList() if err != nil { return err } available := nl.cpuRequestsAvailable() if available < s.bufferCount { finished, err := s.uncordonNodes(nl, s.bufferCount-available) // short circuit if uncordoning means we have enough buffer now if err != nil || finished { return err } nl, err := s.newNodeList() if err != nil { return err } // recalculate available = nl.cpuRequestsAvailable() err = s.increaseNodes(nl, s.bufferCount-available) if err != nil { return err } } else if s.bufferCount < available { err := s.cordonNodes(nl, available-s.bufferCount) if err != nil { return err } } return s.deleteCordonedNodes()
}
</pre>
<p>As you can also see from the code above, we can <em>uncorden</em> any available cordoned nodes in the cluster if we drop below the configured CPU buffer. This is faster than adding a whole new node, so it’s important to check for cordoned nodes before adding a whole new node from scratch. Because of this we also have a configured delay on how long before a cordoned node is deleted (you can see the <a href="https://github.com/markmandel/paddle-soccer/blob/master/server/nodescaler/scaler.go#L222">source here</a>) to limit thrashing on creating and deleting nodes in the cluster unnecessarily.</p>
<p>This is a pretty great start. However, when we want to cordon nodes, we want to cordon only the nodes that have the least number of game server Pods on them, as in this instance, they are most likely to empty first as game sessions come to an end.</p>
<p>Thanks to the Kubernetes API, it’s relatively straightforward to count the number of game server Pods on each Node, and sort them in ascending order. From there we can do arithmetic to determine if we still remain above the desired CPU buffer if we cordon each of the available nodes. If so, we can safely <a href="https://github.com/markmandel/paddle-soccer/blob/master/server/nodescaler/scaler.go#L159">cordon those nodes</a>.</p>
<pre title="scaler.go">
// cordonNodes decrease the number of available nodes by the given number of cpu blocks (but not over),
// but cordoning those nodes that have the least number of games currently on them
func (s Server) cordonNodes(nl *nodeList, gameNumber int64) error { // … removed some input validation ... // how many nodes (n) do we have to delete such that we are cordoning no more // than the gameNumber capacity := nl.nodes.Items[0].Status.Capacity[v1.ResourceCPU] //assuming all nodes are the same cpuRequest := gameNumber * s.cpuRequest diff := int64(math.Floor(float64(cpuRequest) / float64(capacity.MilliValue()))) if diff <= 0 { log.Print("[Info][CordonNodes] No nodes to be cordoned.") return nil } log.Printf("[Info][CordonNodes] Cordoning %v nodes", diff) // sort the nodes, such that the one with the least number of games are first nodes := nl.nodes.Items sort.Slice(nodes, func(i, j int) bool { return len(nl.nodePods(nodes[i]).Items) < len(nl.nodePods(nodes[j]).Items) }) // grab the first n number of them cNodes := nodes[0:diff] // cordon them all for _, n := range cNodes { log.Printf("[Info][CordonNodes] Cordoning node: %v", n.Name) err := s.cordon(&n, true) if err != nil { return err } } return nil
}
</pre>
<h3>Removing Nodes from the Cluster</h3>
<p>Now that we have nodes in our clusters being cordoned, it is just a matter of waiting until the cordoned node is empty of game server Pods before deleting it. The code below also makes sure the node count never drops below a configured minimum as a nice baseline for capacity within our cluster.</p>
<p>You can see this in the code below, and in the <a href="https://github.com/markmandel/paddle-soccer/blob/master/server/nodescaler/scaler.go#L201">original context</a>:</p>
<pre title="scaler.go">
// deleteCordonedNodes will delete a cordoned node if it
// the time since it was cordoned has expired
func (s Server) deleteCordonedNodes() error { nl, err := s.newNodeList() if err != nil { return err } l := int64(len(nl.nodes.Items)) if l <= s.minNodeNumber { log.Print("[Info][deleteCordonedNodes] Already at minimum node count. exiting") return nil } var dn []v1.Node for _, n := range nl.cordonedNodes() { ct, err := cordonTimestamp(n) if err != nil { return err } pl := nl.nodePods(n) // if no game session pods && if they have passed expiry, then delete them if len(filterGameSessionPods(pl.Items)) == 0 && ct.Add(s.shutdown).Before(s.clock.Now()) { err := s.cs.CoreV1().Nodes().Delete(n.Name, nil) if err != nil { return errors.Wrapf(err, "Error deleting cordoned node: %v", n.Name) } dn = append(dn, n) // don't delete more nodes than the minimum number set if l--; l <= s.minNodeNumber { break } } } return s.nodePool.DeleteNodes(dn)
}
</pre>
<h3>Conclusion</h3>
<p>We’ve successfully containerised our game servers, scaled them up as demand increases, and now scaled our Kubernetes cluster down, so we don’t have to pay for underutilised machinery — all powered by the APIs and capabilities that Kubernetes makes available out of the box. While it would take more work to turn this into a production level system, you can already see how to take advantage of the many building blocks available to you.</p>
<p>Before we finish, I would like to apologise for the delay in producing the fourth part in this series. If you saw <a href="https://cloudplatform.googleblog.com/2018/03/introducing-Agones-open-source-multiplayer-dedicated-game-server-hosting-built-on-Kubernetes.html">the announcement</a>, you may have guessed that a lot of my time got taken up developing and releasing <a href="https://agones.dev">Agones</a>, the open source, productised version of this series of posts on running game servers on Kubernetes.</p>
<p>On that note, this will also be the last installment in this series. I had already completed the work to implement scaling down before starting on Agones, and rather than build out new functionality for global cluster management on <a href="https://github.com/markmandel/paddle-soccer">Paddle Soccer</a>, I’m going to focus those efforts building out awesome new features for Agones and bring it up from its current <a href="https://github.com/GoogleCloudPlatform/agones/releases/tag/v0.1">0.1 alpha release</a>, to a full 1.0, production-ready milestone.</p>
<p>I’m very excited about the future of Agones, and if my series of blog posts have inspired you, <a href="https://github.com/GoogleCloudPlatform/agones">watch the GitHub repository</a>, <a href="https://join.slack.com/t/agones/shared_invite/enQtMzE5NTE0NzkyOTk1LWQ2ZmY1Mjc4ZDQ4NDJhOGYxYTY2NTY0NjUwNjliYzVhMWFjYjMxM2RlMjg3NGU0M2E0YTYzNDIxNDMyZGNjMjU">join the Slack</a>, <a href="https://twitter.com/agonesdev">follow us on Twitter</a> and <a href="https://groups.google.com/forum/#!forum/agones-discuss">get involved the mailing list</a>. We’re actively seeking more contributors, and would love to have you involved.</p>
<p>Lastly, I welcome questions and comments here, or <a href="https://twitter.com/neurotic">reach out to me via Twitter</a>. You can also see my <a href="http://www.gdcvault.com/play/1024328/">presentation at GDC</a> and <a href="https://www.youtube.com/watch?v=a08WrvIPKMw">GCAP</a> from 2017 on this topic, as well as check out the code <a href="https://github.com/markmandel/paddle-soccer">in GitHub</a>.</p>
<p>All posts in this series:</p>
<ol>
<li><a href="http://gamasutra.com/blogs/MarkMandel/20170502/297222/Scaling_Dedicated_Game_Servers_with_Kubernetes_Part_1__Containerising_and_Deploying.php">Containerising and Deploying</a></li>
<li><a href="http://www.gamasutra.com/blogs/MarkMandel/20170713/301596/Scaling_Dedicated_Game_Servers_with_Kubernetes_Part_2__Managing_CPU_and_Memory.php">Managing CPU and Memory</a></li>
<li><a href="https://www.gamasutra.com/blogs/MarkMandel/20171011/307354/Scaling_Dedicated_Game_Servers_with_Kubernetes_Part_3__Scaling_Up_Nodes.php">Scaling Up Nodes</a></li>
<li><strong>Scaling Down Nodes</strong></li>
<li><s>Running Globally</s></li>
</ol>
</div>