Rotman International Trading Competition Algorithmic Trading Aftermath
E. Dan Glikstein
Abstract
The Rotman International Trading Competition occurred from Thursday, February 18th to Saturday, February 20th, 2016. The Concordia team placed 13th in the algorithmic trading case – its best historical result. In contrast to last year’s trading paradigm, we moved away from fully-automated trading and adopted a semi-automated, or computer-assisted, paradigm. This change allowed us to successfully adapt to abrupt changes in market dynamics. Furthermore, it greatly simplified the algorithm’s design, which may prove beneficial for future maintenance as well as amelioration.
1 Context
The algorithmic trading case at the Rotman International Trading Competition (RITC)
is a series of 12 five-minute contests, divided into four heats. There are three stocks and one exchange-traded fund (ETF) which seeks to track the performance of a basket of securities composed of the three stocks. As such, the basket – that is, the stocks –
and the ETF can be viewed as close substitutes. When the prices of the ETF and the
basket diverge, an arbitrage opportunity arises. Exploiting this arbitrage opportunity
is the major goal of the algorithmic trading case. Given that the securities in the
market do not all have the same liquidity, efficient liquidation of positions becomes
a major concern. Furthermore, the case is such that inventories are shocked with
position transfers of random magnitude and direction. These exogenous perturbations
to the inventory must be counteracted by frequent rebalancing of the algorithm’s
positions. Lastly, the finite liquidity and the presence of up to 14 other participants
in the market make the occurrence of so-called “flash-crashes” a relatively common
phenomenon as less sophisticated algorithms consume too much liquidity on one side
of the limit order book in a very short period of time. Regarding the fee structure of the case: while market orders are charged transaction fees, liquidity providers are compensated through rebates. Participants who do not have a zero net position at the end of the case are fined $0.50 per share.
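To make the arbitrage check concrete, the Matlab sketch below compares the ETF’s quotes against the cost of trading the basket and only signals when the gap exceeds a threshold meant to cover fees. The one-share-per-stock basket composition, the function name, and the threshold are illustrative assumptions rather than the case’s actual specification.

function signal = arbSignal(etfBid, etfAsk, stockBids, stockAsks, threshold)
    % Compare the ETF's quotes against the cost of trading the basket.
    % Assumes, for illustration only, that one ETF share tracks one share
    % of each stock; the real basket weights come from the case package.
    basketAsk = sum(stockAsks);          % cost of buying the whole basket
    basketBid = sum(stockBids);          % proceeds of selling the whole basket
    if etfBid - basketAsk > threshold
        signal = -1;                     % ETF rich: sell ETF, buy the basket
    elseif basketBid - etfAsk > threshold
        signal = +1;                     % ETF cheap: buy ETF, sell the basket
    else
        signal = 0;                      % spread too small to cover fees
    end
end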
2 Automated Trading
Last year’s algorithm traded in a fully automatic fashion with mixed success. We sought to re-use the entire order management and book analysis modules this year while replacing the econometrics module with a more sophisticated Matlab-based program.
In this way, the algorithm would use Matlab to perform model-fitting and forecasting
of price and volatility while using VBA for the order submission and management
and Excel for the data acquisition. This three-layered approach worked very well in
initial tests both for market-making and trend-following. However, it fell apart when confronted with the volatility experienced in the practice round. (See Figure 1.)
The econometrics module relied on last price data in order to fit a model to the
returns and, from there, forecast the volatility and the price for the near future. A
significant challenge to this was the scarcity of data. While we sampled the market at intervals ranging from 10 ms to 1400 ms, many models were found to be unsuitable for this
competition. It should be noted that the market dynamics of each five-minute round
should be assumed to be more-or-less independent from other rounds. Consequently,
we could not reuse market data from previous rounds to estimate the parameters of
our returns models. Some models were immediately ruled out because their complexity made the estimation time and the minimum amount of data required prohibitive.
For example, we did not find BEKK or other multi-variate GARCH models suitable
for the task, despite the obviously inter-related nature of the securities’ returns. We
did attempt to use GJR-GARCH proper as well as variations which included liquidity measures and a volume process in the conditional variance equation. However, we settled on a simple AR(1)-GARCH(1,1) model which proved fast to estimate, required relatively modest amounts of data, and for which forecasting was straightforward.
(Forecasting with a volume process in the conditional variance equation proved difficult
because we would have to either fit a time-varying Poisson process to the volume data
or knowingly make the false assumption that the volume process was a homogeneous
Poisson process. Even more difficult was forecasting the liquidity measure since the
data available did not allow us to gain an understanding of the dynamics of liquidity
addition and consumption – and cancellation – on each side of the limit order book.)
Equation 1 outlines how the returns on each security were modeled where the symbols
have their traditional meaning.
r_t = \mu + \beta r_{t-1} + \varepsilon_t    (1a)
\varepsilon_t \sim N(0, \sigma_t^2)    (1b)
\sigma_t^2 = \alpha_0 + \alpha_1 \varepsilon_{t-1}^2 + \alpha_2 \sigma_{t-1}^2    (1c)
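As an illustration of the fit-and-forecast step implied by Equation 1, a Matlab sketch using the Econometrics Toolbox could look as follows; the variable returns is assumed to hold the recently sampled last-price returns, and the ten-step horizon is a placeholder rather than the value used in competition.

% Fit the AR(1)-GARCH(1,1) model of Equation 1 to a vector of recent returns
% and forecast a few steps ahead (Econometrics Toolbox; illustrative only).
Mdl    = arima('ARLags', 1, 'Variance', garch(1, 1));   % r_t = mu + beta*r_{t-1} + eps_t
EstMdl = estimate(Mdl, returns, 'Display', 'off');       % quasi-maximum likelihood fit

horizon = 10;                                             % placeholder forecast horizon
[rHat, ~, vHat] = forecast(EstMdl, horizon, 'Y0', returns);
% rHat holds the forecasted returns, vHat the forecasted conditional variances.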
Once we had fit our model, we would forecast the returns and variance for a number
of steps. Using this forecasted return, we would compute bid and ask prices taking
into account our inventory of the security. This allowed us to place our quotes asymmetrically around the mid-price in order to control inventory. Lastly, to avoid naively providing liquidity to arbitrageurs, we would consider the “fair-value” of the security.
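A minimal sketch of such an inventory-skewed quoting rule is given below; the linear skew, the volatility-scaled half-spread, and the fair-value clamp are illustrative assumptions and not the exact parameterisation we used.

function [bid, ask] = quotePrices(mid, rHat, sigmaHat, inventory, maxInv, fairValue)
    % Skew quotes around a forecast-adjusted mid-price to work inventory off.
    % The linear skew and the volatility-scaled half-spread are assumptions
    % made for illustration only.
    center     = mid * (1 + rHat);                  % lean toward the forecasted move
    halfSpread = max(2 * sigmaHat * mid, 0.01);     % widen quotes with forecast volatility
    skew       = (inventory / maxInv) * halfSpread; % long inventory pushes both quotes down
    bid = center - halfSpread - skew;
    ask = center + halfSpread - skew;
    bid = min(bid, fairValue);                      % never bid above fair value...
    ask = max(ask, fairValue);                      % ...nor offer below it
end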
As mentioned above, the market-making proved to be formidably profitable in a
“stable” market. See Figure 2 for an example of its performance on a price path
with changing volatility, trends, and sudden adjustments. With a few changes to last
year’s fully automatic computer trader, we were now capable of efficiently market-
making, capturing trends and arbitrage opportunities, and managing our inventory.
Unfortunately, when subjected to violent markets such as the one to which Figure 1
bears witness, the quotes which came out of the model were too deep in the book to be
of any use – a three-dollar spread in our quotes was not uncommon. Simply narrowing the spread proved disastrous in “stable” markets, as it would become too narrow for the profits to cover losses from missed forecasts, and the rate at which orders had to be submitted and cancelled would exceed the rate permitted by the market.
Given the success of the market-maker in volatile yet orderly markets, it was with
sadness that we abandoned the fully-automated paradigm in order to adopt a more
robust semi-automatic approach.
3 Computer-Assisted Trading
When we came to the realization that a sophisticated fully-automated algorithmic
trader was too delicate, we proceeded to build a semi-automatic algorithm. While
the execution was delegated to the software, the high-level decision-making was left
in the human operator’s hands. The operator would be assisted by the algorithm.
With this approach, the operator would evaluate the market conditions and the signals
provided by the software. Then, using a series of controls, the operator would instruct
the software to perform different tasks, such as opening a set of positions, closing positions, or providing liquidity. The semi-automatic software did what the human operator could not: find liquidity in a fast-moving market. It could also liquidate positions near-optimally, either consuming liquidity or providing it through an emulation of immediate-or-cancel orders, depending on market conditions. While the software was able to decide automatically how to liquidate, it was the human operator who decided when to liquidate. Similarly, the software provided safeguards against excessive risk to prevent taking seemingly unfavourable positions. However, the operator could, if he judged the opportunities to be sufficiently profitable, bypass some of the risk safeguards while still using the software’s ability to find liquidity and provide superior execution.
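The immediate-or-cancel emulation amounted to a submit, wait, cancel loop. The Matlab sketch below conveys the idea; submitLimit, orderFilled, and cancelOrder are hypothetical wrappers around the trading API and not the actual calls in our VBA code.

function remaining = iocLiquidate(security, side, qty, priceFn, waitSec, maxIter)
    % Emulate immediate-or-cancel orders: post a limit order, give it a short
    % window to trade, then cancel whatever is left and repeat with the remainder.
    remaining = qty;
    for k = 1:maxIter
        if remaining <= 0, break; end
        px  = priceFn(security, side);              % e.g. join or improve the best quote
        oid = submitLimit(security, side, remaining, px);
        pause(waitSec);                             % brief window to capture liquidity
        filled = orderFilled(oid);                  % shares executed so far
        cancelOrder(oid);                           % cancel the unfilled remainder
        remaining = remaining - filled;
    end
end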
Leaving the high-level decision-making to the human operator greatly simplified
the algorithm’s design. While it now required a more complex set of controls, the
programming could all be done in VBA. The code is split by function, but the call stack rarely exceeds a depth of three.
4 Review of Performance
4.1 Heat 1
The first three rounds went well. Our profits exceeded our neighbouring competitors’
– Chulalongkorn and Duke – by significant margins, and we felt comfortable with the robustness displayed by the algorithm. We recall sensing some oscillatory behaviour
in the inventory when the algorithm was liquidating. Both the first and second rounds in the heat had significant arbitrage opportunities while the third did not. We relied on
trend-following to minimize the losses incurred from the inventory shocks.
4.2 Heat 2
The first round of this heat had two or three arbitrage opportunities which we used
to eke out a very large profit, far exceeding those of Baruch, Boston, and LUISS. We
did not notice inventory oscillation. The second and third rounds, on the other hand,
produced very large losses. We are unsure whether the server was slowed down intentionally by another market participant or whether some other technical or programming glitch occurred. We would watch our limit orders be inscribed in the order book and executed even though we had run commands to cancel all of our outstanding orders. We
learned that our orders were queued before they were inscribed in the books and that
our cancellation commands would not affect orders not yet inscribed. During the night
between Heat 2 and Heat 3, we added some extra execution safeguards and improved
our immediate-or-cancel emulation to also cancel orders not yet inscribed in the limit
order books.
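The essence of that fix was to keep our own registry of every submitted order so that a cancel-all pass also targets orders still queued server-side. The Matlab-style sketch below illustrates the idea; submitLimit, cancelOrder, and getOpenOrderIds are hypothetical API wrappers rather than the functions in our VBA implementation, and the three functions are shown together for brevity.

function id = trackedSubmit(security, side, qty, px)
    % Record every order we send before the book acknowledges it.
    id = submitLimit(security, side, qty, px);
    orderRegistry('add', id);
end

function cancelEverything()
    % Cancel inscribed orders and orders still sitting in the server queue.
    ids = union(orderRegistry('list'), getOpenOrderIds());
    for id = ids
        cancelOrder(id);
    end
    orderRegistry('clear');
end

function out = orderRegistry(action, id)
    % Tiny persistent store of the order ids submitted this round.
    persistent ids
    if isempty(ids), ids = []; end
    switch action
        case 'add'
            ids(end+1) = id;
        case 'clear'
            ids = [];
    end
    out = ids;   % 'list' simply returns the current registry
end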
4.3 Heat 3
The first round of this heat went very well. We used the three arbitrage opportunities
to generate nearly $57k in profits, compared to neighbouring Fairfield’s $13.1k and the Northern Alberta Institute of Technology’s $40k. The second round was also very profitable, with two very large and one large arbitrage opportunity. We beat Fairfield but NAIT came out ahead of us. The profit-and-loss graphs for these two rounds are much smoother, indicating that we had successfully resolved the inventory oscillation problem. The third round did not have any significant arbitrage opportunities. We made
some profits providing liquidity early on in the case but lost significant amounts on
the position transfers. Nonetheless, we followed a couple of trends and used our superior
liquidation to end with a loss of about $1.3k compared to our neighbours’ $23.1k and
$24.8k losses.
4.4 Heat 4
The first round of this heat had one very large and one large arbitrage opportunity. We
were flanked by Laval, which beat our $57.8k with $63k, and Waterloo, whose paltry $15k we both exceeded. The second and third rounds did not present much opportunity for
arbitrage. Trend-following had worked twice before, so we gambled that we could make a profit by following trends again. However, the trends were not large enough to cover our liquidation costs, and this surely cost us in the rankings as we ended with small losses compared to our neighbours’ small profits.
the unsuccessful gambles of the last. Some aspects of the operation could be automated
without the human operator losing any high-level decision-making power. While the
liquidation procedure performed well, forecasting market impact, if at all feasible, would be highly desirable. Lastly, future versions should create a time series of a liquidity measure and present a visual summary. Future versions should also provide a visual summary of the profitability of the arbitrage opportunity and allow for automatically entering partially hedged positions. If the structure of any future version permits it, trailing stop-losses should be added to automate the liquidation of a trend-following position.
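As a sketch of what such a feature might look like, the following Matlab function implements a simple trailing stop for a long trend-following position; getLastPrice and liquidateFn are hypothetical handles into the trading layer, and the trail width is a placeholder parameter.

function trailingStop(getLastPrice, liquidateFn, trail, pollSec)
    % Ratchet a stop level up behind the highest price seen; once the price
    % falls back by more than 'trail', hand the position to the liquidator.
    peak = getLastPrice();
    while true
        pause(pollSec);
        px   = getLastPrice();
        peak = max(peak, px);
        if px <= peak - trail
            liquidateFn();      % e.g. the IOC-emulated liquidation routine
            return;
        end
    end
end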
Figure 1: One-second candle chart of the ETF’s last price during the practice round.
Figure 2: Top: One-second candle chart of a stock’s last price in test cases. Bottom:
Profits and losses of the automated market-maker.
Figure 3: Control board view of LiquidDash1.2, a semi-automatic algorithmic trader.
Figure 4: Price paths of the three stocks, POOH, TIGR, and EYOR, and the exchange-traded fund, HUNY, in the first round.
Figure 5: Profits and Losses. Left to Right. Top: choppy liquidation hinting at position
oscillation in round 1.1; when “cancelled” orders get executed in round 2.2. Bottom:
smoother liquidation and big arb in round 3.1; riding a trend in round 3.3.