Over the past few years, I've been working on revamping the Titan HD AI. The current AI architecture has some weaknesses that seem too entrenched to fix with minor changes. Among the issues are:
- The AI does not understand that a few strong legions are better than a large number of weak legions.
- The AI does not look far enough ahead in battle to understand when it is putting the Titan at too much risk.
- The AI does not look far enough ahead in battle to understand that an attacker who fails to move will eventually suffer a time loss.
- The AI does not look far enough ahead in battle to understand that it's often not worth giving up a positional advantage to get a temporary gang-up on opposing units.
The AI has other flaws as well, but they tend to be more subtle; the problems above are clear even to novices, which makes the strongest AI vulnerable to exploitation.
For about a year, I worked on introducing to Titan the technique used in then-state-of-the-art Go-playing AIs: Monte Carlo tree search (MCTS). MCTS is very powerful in Go, but it appears to have some limitations that Titan is particularly fond of triggering. In particular, MCTS does not appear to do well when the branching factor varies drastically between peer nodes of the game tree. Go does not have this characteristic: the number of available moves tends to shrink monotonically as the game goes on. Titan, on the other hand, has large branching factors during movement on the Masterboard and in the Battlelands, followed by relatively narrow branching factors during the muster and strike phases. That variation makes it hard for MCTS to compare the results of continuing to move units against the results of advancing to the next phase.
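To make the branching-factor problem concrete, here is a minimal sketch of the standard UCT selection rule at the heart of MCTS. The data layout and names are illustrative only, not taken from Titan HD's code; the point is how thinly a wide node spreads its simulation budget compared with a narrow one.

```python
import math

def uct_select(children, exploration=1.4):
    """Pick the child with the best UCT score.

    `children` is a list of dicts holding 'visits' and 'wins' counts;
    these names are illustrative, not taken from Titan HD's code.
    """
    total = sum(c["visits"] for c in children)

    def score(c):
        if c["visits"] == 0:
            return float("inf")  # always expand unvisited moves first
        exploit = c["wins"] / c["visits"]          # observed win rate
        explore = exploration * math.sqrt(math.log(total) / c["visits"])
        return exploit + explore

    return max(children, key=score)

# A wide maneuver node spreads its budget thinly: 200 candidate moves
# with ~1 visit each yield win-rate estimates that are mostly noise.
wide = [{"visits": 1, "wins": 1} for _ in range(200)]
# A narrow strike node with the same budget gets ~66 visits per move,
# so its statistics are far more trustworthy.
narrow = [{"visits": 66, "wins": 33} for _ in range(3)]
```

When the tree mixes nodes like `wide` and `narrow` at the same depth, the search's confidence in its estimates varies wildly from node to node, which is exactly the comparison problem described above.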
These limitations can be overcome with a sufficiently intelligent heuristic that prunes unlikely branches of the search tree. However, the heuristics I had been using to guide the previous combinatorial-search method were crafted separately for each phase of the game (maneuver, strike, etc.) and turned out to be ill suited to a more generic game-tree crawl like MCTS.
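The shape of that pruning step is simple, whatever heuristic ends up driving it. The sketch below assumes a single phase-agnostic scoring function; nothing here is Titan HD's actual code.

```python
def prune_moves(moves, heuristic, keep=20):
    """Keep only the `keep` highest-scoring moves before the search
    expands them. `heuristic` stands in for whatever phase-agnostic
    evaluation the engine ends up using; it is a hypothetical
    placeholder, not Titan HD's actual scoring function."""
    return sorted(moves, key=heuristic, reverse=True)[:keep]
```

Capping a 200-move maneuver node at its 20 most promising candidates is what lets the search budget per remaining move approach that of the naturally narrow phases.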
I had been exploring completely revamped heuristics when, late last year, I heard of newer techniques using neural networks to guide MCTS searches. A few months later, those techniques gained national attention as Google's DeepMind team announced a new Go AI that had defeated a European champion. This past April, that AI defeated world champion Lee Sedol 4-1 in a series that stunned both the Go world and the world of AI development.
I'm currently in the process of trying to replicate some level of that success. While I don't have any ambition of developing an AI that can defeat world-champion Titan players, I am attracted to the NN/MCTS combination because it promises to produce somewhat more human-like behaviors: its flaws will hopefully look more like real people's mistakes, and less obviously like the limitations of an artificial model of the game.
The process of developing a neural network for a complex game like Titan is challenging. I'm aided by a growing catalog of human games that are recorded on Titan HD's servers, but I don't have DeepMind's computational resources (they trained deep neural networks for weeks on systems that I'm guessing had thousands of CPUs). Hopefully, my more modest ambitions match my more modest resources.
I don't yet have many concrete results to share. I have, however, completed a proof of concept showing that a neural network trained to estimate battle outcomes can give higher-quality estimates than my current hand-tuned heuristic.
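For readers curious what "a network that estimates battle outcomes" looks like in the smallest possible terms, here is a toy sketch: a one-hidden-layer network that maps a feature vector describing a battle position to a win-probability estimate. The feature count, layer sizes, and random weights are all invented for illustration; the post does not describe the real network, and in practice the weights would be fitted to outcomes from the recorded human games rather than drawn at random.

```python
import math
import random

def battle_value(features, w1, b1, w2, b2):
    """One-hidden-layer network mapping a battle-position feature
    vector to a win-probability estimate in (0, 1). Everything here
    (feature set, layer sizes, weights) is a hypothetical stand-in."""
    hidden = [math.tanh(sum(f * w for f, w in zip(features, col)) + b)
              for col, b in zip(w1, b1)]
    logit = sum(h * w for h, w in zip(hidden, w2)) + b2
    return 1.0 / (1.0 + math.exp(-logit))  # sigmoid squashes to (0, 1)

# Randomly initialized weights, purely to show the shapes involved.
random.seed(0)
n_features, n_hidden = 16, 8
w1 = [[random.gauss(0, 0.1) for _ in range(n_features)]
      for _ in range(n_hidden)]
b1 = [0.0] * n_hidden
w2 = [random.gauss(0, 0.1) for _ in range(n_hidden)]
b2 = 0.0
features = [random.gauss(0, 1) for _ in range(n_features)]
estimate = battle_value(features, w1, b1, w2, b2)
```

The appeal over a hand-tuned heuristic is that the same evaluation applies uniformly across phases, which is exactly what a generic tree search needs.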
I'll share any further developments as they occur.