Papers by Nicolas Barriga

A commonly used technique for managing AI complexity in real-time strategy (RTS) games is to use action and/or state abstractions. High-level abstractions can often lead to good strategic decision making, but tactical decision quality may suffer due to lost details. A competing method is to sample the search space, which often leads to good tactical performance in simple scenarios, but poor high-level planning. We propose to use a deep convolutional neural network (CNN) to select among a limited set of abstract action choices, and to utilize the remaining computation time for game tree search to improve low-level tactics. The CNN is trained by supervised learning on game states labelled by Puppet Search, a strategic search algorithm that uses action abstractions. The network is then used to select a script — an abstract action — to produce low-level actions for all units. Subsequently, the game tree search algorithm improves the tactical actions of a subset of units, using a limited view of the game state that only considers units close to opponent units. Experiments in the µRTS game show that the combined algorithm results in higher win rates than either of its two independent components and other state-of-the-art µRTS agents. To the best of our knowledge, this is the first successful application of a convolutional network to play a full RTS game on standard game maps, as previous work has focused on sub-problems, such as combat, or on very small maps.
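A minimal sketch of the two-stage decision loop described above, with invented script names, scores, and unit identifiers (this is not the authors' code): a trained network picks one abstract script for the whole army, and the remaining time budget goes to a tactical search over only the units close to the enemy.

```python
def policy_scores(state):
    # Stands in for the trained CNN; returns a score per candidate script.
    return {"rush": 0.2, "expand": 0.7, "defend": 0.1}

def select_script(state):
    scores = policy_scores(state)
    return max(scores, key=scores.get)

def refine_frontline(state, script, frontline_units):
    # Stands in for the game tree search: only units near opponent units get
    # individually searched actions; all other units follow the script.
    return {unit: ("searched", script) for unit in frontline_units}

state = {}                      # abstract game state placeholder
script = select_script(state)
orders = refine_frontline(state, script, frontline_units=["u3", "u7"])
```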

Significant progress has been made in recent years towards stronger Real-Time Strategy (RTS) game playing agents. Some of the latest approaches have focused on enhancing standard game tree search techniques with a smart sampling of the search space, or on directly reducing this search space. However, experiments have thus far only been performed using small scenarios. We provide experimental results on the performance of these agents on increasingly larger scenarios. Our main contribution is Puppet Search, a new adversarial search framework that reduces the search space by using scripts that can expose choice points to a look-ahead search procedure. Selecting a combination of a script and decisions for its choice points represents an abstract move to be applied next. Such moves can be directly executed in the actual game, or in an abstract representation of the game state which can be used by an adversarial tree search algorithm. We tested Puppet Search in µRTS, an abstract RTS game popular within the research community, allowing us to directly compare our algorithm against state-of-the-art agents published in the last few years. We show a similar performance to other scripted and search-based agents on smaller scenarios, while outperforming them on larger ones.
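A hedged sketch of the core Puppet Search idea: a script exposes discrete choice points, and a look-ahead procedure selects values for them. The choice points, their options, and the evaluation below are all invented for illustration, and plain enumeration stands in for the adversarial tree search used in the paper.

```python
import itertools

CHOICES = {"opening": ["rush", "boom"], "army": ["melee", "ranged"]}

def evaluate(state, assignment):
    # Placeholder evaluation of the abstract game state after applying
    # the abstract move; a real implementation would simulate forward.
    score = {"rush": 3, "boom": 5}[assignment["opening"]]
    score += {"melee": 1, "ranged": 2}[assignment["army"]]
    return score

def best_abstract_move(state):
    # Enumerate all choice-point assignments; in Puppet Search this
    # enumeration is embedded in an adversarial look-ahead search instead.
    best, best_score = None, float("-inf")
    for combo in itertools.product(*CHOICES.values()):
        assignment = dict(zip(CHOICES, combo))
        score = evaluate(state, assignment)
        if score > best_score:
            best, best_score = assignment, score
    return best

move = best_abstract_move({})
```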

Real-time strategy (RTS) games, such as Blizzard's StarCraft, are fast-paced war simulation games in which players have to manage economies, control many dozens of units, and deal with uncertainty about opposing unit locations in real time. Even in perfect information settings, constructing strong AI systems has been difficult due to enormous state and action spaces and the lack of good state evaluation functions and high-level action abstractions. To this day, good human players are still handily defeating the best RTS game AI systems, but this may change in the near future given the recent success of deep convolutional neural networks (CNNs) in computer Go, which demonstrated how networks can be used for evaluating complex game states accurately and to focus look-ahead search. In this paper we present a CNN for RTS game state evaluation that goes beyond commonly used material-based evaluations by also taking spatial relations between units into account. We evaluate the CNN's performance by comparing it with various other evaluation functions by means of tournaments played by several state-of-the-art search algorithms. We find that, despite its much slower evaluation speed, the CNN-based search performs significantly better compared to simpler but faster evaluations. These promising initial results, together with recent advances in hierarchical search, suggest that dominating human players in RTS games may not be far off.
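A toy numeric illustration (assumed, not the paper's network) of why spatial inputs matter for state evaluation: two states with identical material can evaluate differently once unit positions are considered. The hand-set clustering bonus below stands in for one of the many filters a real CNN would learn from labelled states.

```python
def neighbours(units, r, c):
    # Friendly units in the 3x3 window around (r, c), excluding the unit itself.
    return sum((rr, cc) in units
               for rr in range(r - 1, r + 2)
               for cc in range(c - 1, c + 2)) - 1

def material_eval(mine, theirs):
    # Commonly used baseline: material count only, blind to positions.
    return len(mine) - len(theirs)

def spatial_eval(mine, theirs, weight=0.1):
    # Material plus a crude spatial term rewarding clustered friendly units.
    bonus = sum(neighbours(mine, r, c) for r, c in mine)
    return material_eval(mine, theirs) + weight * bonus

# Two armies with identical material (4 units) but different formations:
clustered = {(3, 3), (3, 4), (4, 3), (4, 4)}
scattered = {(0, 0), (0, 7), (7, 0), (7, 7)}
enemy = set()
```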

Smart decision making at the tactical level is important for Artificial Intelligence (AI) agents to perform well in the domain of real-time strategy (RTS) games. Winning battles is crucial in RTS games, and while humans can decide when and how to attack based on their experience, it is challenging for AI agents to estimate combat outcomes accurately. A few existing models address this problem in the game of StarCraft but present many restrictions, such as not modeling injured units, supporting only a small number of unit types, or being able to predict the winner of a fight but not the remaining army. Prediction using simulations is a popular method, but it is generally slow and requires extensive coding to model the game engine accurately. This paper introduces a model based on Lanchester's attrition laws which addresses the mentioned limitations while being faster than running simulations. Unit strength values are learned using maximum likelihood estimation from past recorded battles. We present experiments that use a StarCraft simulator for generating battles for both training and testing, and show that the model is capable of making accurate predictions. Furthermore, we implemented our method in a StarCraft bot that uses either this or traditional simulations to decide when to attack or to retreat. We present tournament results (against top bots from the 2014 AIIDE competition) comparing the performances of the two versions, and show increased winning percentages for our method.
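An illustrative use of Lanchester's square law, the family of attrition laws the paper builds on, with made-up effectiveness values. Under the square law an army's fighting power is proportional to effectiveness times the square of its unit count, so both the winner and the number of survivors of a two-army fight have closed-form predictions; in the paper the per-unit effectiveness values are learned by maximum likelihood from recorded battles.

```python
import math

def predict_battle(n_a, alpha, n_b, beta):
    """Predict winner and surviving units under Lanchester's square law.
    n_a, n_b: starting unit counts; alpha, beta: per-unit effectiveness."""
    power_a = alpha * n_a ** 2
    power_b = beta * n_b ** 2
    if power_a >= power_b:
        survivors = math.sqrt(n_a ** 2 - (beta / alpha) * n_b ** 2)
        return "A", survivors
    survivors = math.sqrt(n_b ** 2 - (alpha / beta) * n_a ** 2)
    return "B", survivors

# 10 weaker units vs 8 slightly stronger ones (illustrative values):
winner, remaining = predict_battle(10, 1.0, 8, 1.2)
```

Note that the larger army wins here despite its lower per-unit effectiveness, which is exactly the numbers-favoring behaviour the square law models.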
Real-Time Strategy (RTS) video games have proven to be a very challenging application area for Artificial Intelligence research. Existing AI solutions are limited by vast state and action spaces and real-time constraints. Most implementations efficiently tackle various tactical or strategic sub-problems, but there is no single algorithm fast enough to be successfully applied to full RTS games. This paper introduces a hierarchical adversarial search framework which implements a different abstraction at each level — from deciding how to win the game at the top of the hierarchy to individual unit orders at the bottom.

Real-Time Strategy (RTS) video games have proven to be a very challenging application area for artificial intelligence research. Existing AI solutions are limited by vast state and action spaces and real-time constraints. Most implementations efficiently tackle various tactical or strategic sub-problems, but there is no single algorithm fast enough to be successfully applied to big problem sets (such as a complete instance of the StarCraft RTS game). This paper presents a hierarchical adversarial search framework which more closely models the human way of thinking — much like the chain of command employed by the military. Each level implements a different abstraction — from deciding how to win the game at the top of the hierarchy to individual unit orders at the bottom. We apply a 3-layer version of our model to SparCraft — a StarCraft combat simulator — and show that it outperforms state-of-the-art algorithms such as Alpha-Beta, UCT, and Portfolio Search in large combat scenarios featuring multiple bases and up to 72 mobile units per player under real-time constraints of 40 ms per search episode.
In this paper we propose using a Genetic Algorithm to optimize the placement of buildings in Real-Time Strategy games. Candidate solutions are evaluated by running base assault simulations. We present experimental results in SparCraft — a StarCraft combat simulator — using battle setups extracted from human and bot StarCraft games. We show that our system is able to turn base assaults that are losses for the defenders into wins, as well as reduce the number of surviving attackers. Performance is heavily dependent on the quality of the prediction of the attacker army composition used for training, and its similarity to the army used for evaluation. These results apply to both human and bot games.
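A rough sketch of the approach with invented details: a genetic algorithm searching over building placements on a small grid, where fitness comes from a stubbed-out assault simulation. The real system scores candidates with SparCraft base-assault simulations; here a layout is simply treated as more defensible the closer its buildings sit to one corner.

```python
import random

random.seed(0)
GRID, N_BUILDINGS = 8, 4          # assumed base size and building count

def simulate_assault(layout):
    # Stub for the base-assault simulation: layouts whose buildings cluster
    # near corner (0, 0) score higher (closer to 0 means more defensible).
    return -sum(x + y for x, y in layout)

def mutate(layout):
    # Move one randomly chosen building to a random tile.
    child = list(layout)
    child[random.randrange(len(child))] = (
        random.randrange(GRID), random.randrange(GRID))
    return child

# Population of random layouts; evolve with elitism plus mutation.
population = [[(random.randrange(GRID), random.randrange(GRID))
               for _ in range(N_BUILDINGS)] for _ in range(20)]
initial_best = max(map(simulate_assault, population))
for generation in range(30):
    population.sort(key=simulate_assault, reverse=True)
    parents = population[:10]
    population = parents + [mutate(random.choice(parents)) for _ in range(10)]
best_score = max(map(simulate_assault, population))
```

Because the top half of each generation is carried over unchanged, the best fitness never decreases across generations.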


Real-Time Strategy (RTS) games have shown to be very resilient to standard adversarial tree search techniques. Recently, a few approaches to tackle their complexity have emerged that use game state or move abstractions, or both. Unfortunately, the supporting experiments were either limited to simpler RTS environments (µRTS, SparCraft) or lack testing against state-of-the-art game playing agents. Here, we propose Puppet Search, a new adversarial search framework based on scripts that can expose choice points to a look-ahead search procedure. Selecting a combination of a script and decisions for its choice points represents a move to be applied next. Such moves can be executed in the actual game, thus letting the script play, or in an abstract representation of the game state which can be used by an adversarial tree search algorithm. Puppet Search returns a principal variation of scripts and choices to be executed by the agent for a given time span. We implemented the algorithm in a complete StarCraft bot. Experiments show that it matches or outperforms all of the individual scripts that it uses when playing against state-of-the-art bots from the 2014 AIIDE StarCraft competition.
We propose two parallel UCT (Upper Confidence bounds applied to Trees) search algorithms that take advantage of modern GPU hardware. Experiments using the game of Ataxx are conducted, and the algorithms' speed and playing strength are compared to sequential UCT running on the CPU and to Block Parallel UCT, which runs its simulations on a GPU. Empirical results show that our proposed Multiblock Parallel algorithm outperforms the other approaches and can take advantage of the GPU hardware without the added complexity of searching multiple trees.
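A sketch of the UCB1 rule at the heart of UCT, together with the batching idea behind block-parallel GPU variants: instead of one playout per tree descent, a whole block of playouts is launched from the selected node. The game, win probabilities, and block size below are invented, and a flat one-level tree stands in for a full UCT tree.

```python
import math, random

random.seed(1)
BLOCK = 32                        # playouts per descent (GPU block stand-in)

def ucb1(wins, visits, parent_visits, c=1.4):
    # Standard UCB1: exploitation term plus exploration bonus.
    if visits == 0:
        return float("inf")
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

def rollout_block(move):
    # Stands in for a block of GPU playouts; move 1 is secretly the best.
    p = 0.6 if move == 1 else 0.4
    return sum(random.random() < p for _ in range(BLOCK))

stats = {m: [0, 0] for m in range(3)}          # move -> [wins, visits]
for episode in range(200):
    total = sum(visits for _, visits in stats.values()) + 1
    move = max(stats, key=lambda m: ucb1(*stats[m], total))
    stats[move][0] += rollout_block(move)
    stats[move][1] += BLOCK
best_move = max(stats, key=lambda m: stats[m][0] / stats[m][1])
```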
Advanced Software and Control for Astronomy II, 2008
The Atacama Large Millimeter Array (ALMA) is a joint project between astronomical organizations in Europe, North America, and Japan. ALMA will consist of at least 50 twelve-meter antennas operating in the millimeter and submillimeter wavelength range. It will be located at an altitude above 5000 m in the Chilean Atacama desert. The ALMA Test Facility (ATF), located in New Mexico, USA, is a proving ground for the development and testing of hardware, software, and commissioning and operational procedures.
Ground-based and Airborne Instrumentation for Astronomy IV, 2012
The Gemini Planet Imager is a next-generation instrument for the direct detection and characterization of young warm exoplanets, designed to be an order of magnitude more sensitive than existing facilities. It combines a 1700-actuator adaptive optics system, an apodized-pupil Lyot coronagraph, a precision interferometric infrared wavefront sensor, and an integral field spectrograph. All hardware and software subsystems are now complete and undergoing integration and test at UC Santa Cruz. We will present test results on each subsystem and the results of end-to-end testing. In laboratory testing, GPI has achieved a raw contrast (without post-processing) of 10^-6 (5 sigma) at 0.4 arcseconds, and with multiwavelength speckle suppression, 2×10^-7 at the same separation.

Observatory Operations: Strategies, Processes, and Systems III, 2010
Starting in 2009, the ALMA project initiated one of its most exciting phases within construction: the first antenna from one of the vendors was delivered to the Assembly, Integration and Verification team. With this milestone and the closure of the ALMA Test Facility in New Mexico, the JAO Computing Group in Chile found itself on the front line of the project's software deployment and integration effort. Among the group's main responsibilities are the deployment, configuration, and support of the observation systems, in addition to infrastructure administration, all of which needs to be done in close coordination with the development groups in Europe, North America, and Japan. Software support has been the primary point of interaction with the current users (mainly scientists, operators, and hardware engineers), as the software is normally the most visible part of the system.

2009 International Conference of the Chilean Computer Science Society, 2009
Probabilistic sampling methods have become very popular for solving single-shot path planning problems. Rapidly-exploring Random Trees (RRTs) in particular have been shown to be very efficient in solving high-dimensional problems. Even though several RRT variants have been proposed to tackle the dynamic replanning problem, these methods only perform well in environments with infrequent changes. This paper addresses the dynamic path planning problem by combining simple techniques in a multi-stage probabilistic algorithm. This algorithm uses RRTs as an initial solution, informed local search to fix infeasible paths, and a simple greedy optimizer. The algorithm is capable of recognizing when the local search is stuck and subsequently restarting the RRT. We show that this combination of simple techniques provides better responses to a highly dynamic environment than the dynamic RRT variants.
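A minimal 2D RRT sketch for the first stage of the multi-stage planner described above (workspace bounds, step size, and goal bias are illustrative, not the paper's parameters): grow a tree from the start until a node lands near the goal, then walk parent pointers back to extract a path. The later stages, local repair of invalidated segments and greedy smoothing, would operate on the returned path when the environment changes.

```python
import math, random

random.seed(2)
START, GOAL = (0.0, 0.0), (9.0, 9.0)
STEP, GOAL_RADIUS = 0.5, 0.6

def steer(a, b):
    # Move from a towards b by at most STEP.
    d = math.dist(a, b)
    if d <= STEP:
        return b
    t = STEP / d
    return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

def rrt(max_iters=5000):
    nodes, parent = [START], {START: None}
    for _ in range(max_iters):
        # 10% goal bias, otherwise a uniform sample in the workspace.
        sample = GOAL if random.random() < 0.1 else (
            random.uniform(0, 10), random.uniform(0, 10))
        nearest = min(nodes, key=lambda n: math.dist(n, sample))
        new = steer(nearest, sample)
        nodes.append(new)
        parent[new] = nearest
        if math.dist(new, GOAL) < GOAL_RADIUS:
            path, node = [], new
            while node is not None:       # walk parents back to the start
                path.append(node)
                node = parent[node]
            return path[::-1]
    return None

path = rrt()
```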