The efficient market hypothesis states that financial markets are "informationally efficient": the prices of traded assets reflect all known information at any given time. But if this is true, why do prices vary from day to day despite no new fundamental information? The answer lies in a factor individual traders commonly overlook: liquidity.
Many large institutional trades throughout the day have nothing to do with information and everything to do with liquidity. Investors who feel overexposed will aggressively hedge or liquidate positions, which affects the price. These liquidity demanders are often willing to pay a price to exit their positions, which can produce a profit for liquidity providers. This ability to profit from liquidity provision rather than information seems to contradict the efficient market hypothesis, but it forms the foundation of statistical arbitrage.
Statistical arbitrage aims to capitalize on the relationship between price and liquidity by profiting from the statistical mispricing of one or more assets, as measured against the expected values those assets are assigned by a statistical model.
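To make this concrete, here is a minimal sketch of one common statistical model: a pairs trade that flags mispricing when the price ratio of two correlated stocks drifts far from its historical average. The prices and the two-standard-deviation trigger are illustrative assumptions, not data or thresholds from the article.

```python
import statistics

# Hypothetical daily closing prices for two closely correlated stocks.
stock_a = [100.0, 101.5, 102.0, 101.0, 103.0, 104.5, 103.5, 105.0]
stock_b = [ 50.0,  50.8,  51.1,  50.6,  51.4,  52.2,  51.9,  49.0]

# Model the relationship as a simple price ratio (a real desk would
# estimate a hedge ratio via regression or a cointegration test).
ratios = [a / b for a, b in zip(stock_a, stock_b)]
mean = statistics.mean(ratios)
stdev = statistics.stdev(ratios)

# z-score of the latest ratio: how far today's spread sits from its
# historical average, in standard deviations.
z = (ratios[-1] - mean) / stdev
print(f"latest ratio z-score: {z:.2f}")

# A common rule of thumb: if |z| > 2, bet on the spread reverting by
# shorting the rich leg and buying the cheap leg.
if z > 2:
    print("ratio unusually high: short A, buy B")
elif z < -2:
    print("ratio unusually low: buy A, short B")
```

The model expresses no view on where either stock is headed, only on the spread between them reverting to its mean.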
What Is Statistical Arbitrage?
Statistical arbitrage originated in the 1980s from the hedging demand created by Morgan Stanley's equity block trading desk operations. Morgan Stanley was able to avoid price penalties associated with large block purchases by buying shares in closely correlated stocks as a hedge against its position. For example, if the firm purchased a large block of shares, it would short a closely correlated stock to hedge against any major downturns in the market. This largely eliminated market risk while the firm sought to place the stock it had purchased in a block transaction.
Traders soon began to think of these pairs not as a block to be executed and its hedge, but rather as two sides of a trading strategy aimed at profit making rather than simply hedging. These pair trades eventually evolved into various other strategies aimed at taking advantage of statistical differences in security prices due to liquidity, volatility, risk or other factors. We now classify these strategies as statistical arbitrage.
Types of Statistical Arbitrage
There are many types of statistical arbitrage created to take advantage of several different types of opportunities. While some types have been phased out by a more efficient marketplace, there are several other opportunities that have arisen to take their place.
Risk arbitrage is a form of statistical arbitrage that seeks to profit from merger situations. Investors purchase stock in the target and (if it's a stock transaction) simultaneously short the stock of the acquirer. The result is a profit realized from the difference between the buyout price and the market price.
Unlike traditional statistical arbitrage, risk arbitrage involves taking on real risk. The largest is that the merger will fall through and the target's stock will drop back to its pre-merger levels. Another risk concerns the time value of the money invested: mergers that take a long time to close can eat into investors' annualized returns.
The key to success in risk arbitrage is determining the likelihood and timeliness of the merger and comparing that with the difference in price between the target stock and the buyout offer. Some risk arbitrageurs have begun to speculate on takeover targets as well, which can lead to substantially greater profits with equally greater risk.
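The arithmetic behind that comparison can be sketched as follows. All of the numbers (offer price, closing probability, time to close) are hypothetical; a real arbitrageur would estimate them from deal terms and regulatory conditions.

```python
# Hypothetical all-cash deal: buyout offer vs. current market price.
offer_price = 50.00     # announced cash buyout price per share
market_price = 47.00    # target's price after the announcement
pre_deal_price = 40.00  # roughly where the stock traded pre-announcement
months_to_close = 6

# Gross spread if the deal closes as announced.
spread = offer_price - market_price                 # $3.00 per share
gross_return = spread / market_price                # about 6.4%
annualized = gross_return * (12 / months_to_close)  # about 12.8%

# Probability-weighted view: weigh the spread against the drop back
# to pre-deal levels if the merger falls through.
p_close = 0.90
expected_value = (p_close * spread
                  + (1 - p_close) * (pre_deal_price - market_price))
print(f"spread: ${spread:.2f}, annualized: {annualized:.1%}, "
      f"expected value per share: ${expected_value:.2f}")
```

Note how sensitive the expected value is to the closing probability: with a $7 downside, even a modest chance of deal failure consumes much of the $3 spread.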
Volatility arbitrage is a popular type of statistical arbitrage that focuses on taking advantage of the differences between the implied volatility of an option and a forecast of the future realized volatility in a delta-neutral portfolio. Essentially, volatility arbitrageurs are speculating on the volatility of the underlying security rather than making a directional bet on the security's price.
The key to this strategy is accurately forecasting future volatility, which can diverge from the market's implied estimate for a variety of reasons, including:
- Patent disputes
- Clinical trial results
- Uncertain earnings
- M&A speculation
Once a volatility arbitrageur has estimated the future realized volatility, he or she can begin to look for options where the implied volatility is either significantly lower or higher than the forecast realized volatility for the underlying security. If the implied volatility is lower, the trader can buy the option and hedge with the underlying security to make a delta-neutral portfolio. Similarly, if the implied volatility is higher, the trader can sell the option and hedge with the underlying security to make a delta-neutral portfolio.
The trader will then realize a profit on the trade when the underlying security's realized volatility moves closer to his or her forecast than it is to the market's forecast (or implied volatility). The profit is realized from the trade through the continual rehedging required to keep the portfolio delta neutral.
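The delta-neutral construction described above can be sketched numerically. The article does not name a pricing model, so this example assumes the Black-Scholes delta for a European call; all parameters are hypothetical.

```python
from math import log, sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call_delta(spot, strike, rate, vol, t):
    """Black-Scholes delta of a European call option."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * t) / (vol * sqrt(t))
    return norm_cdf(d1)

# Hypothetical trade: implied vol (20%) looks low versus our forecast
# of realized vol (30%), so we buy calls and short stock against them.
spot, strike, rate, t = 100.0, 100.0, 0.02, 0.25
implied_vol, forecast_vol = 0.20, 0.30

contracts = 10  # each option contract covers 100 shares
delta = bs_call_delta(spot, strike, rate, implied_vol, t)
shares_to_short = contracts * 100 * delta
print(f"delta: {delta:.3f}, short {shares_to_short:.0f} shares to stay neutral")

# As the spot moves, delta changes; rehedging back to neutral is where
# the profit (or loss) from the volatility difference is realized.
new_delta = bs_call_delta(105.0, strike, rate, implied_vol, t)
print(f"after a move to 105: delta {new_delta:.3f}, "
      f"short {contracts * 100 * new_delta:.0f} shares")
```

If realized volatility does run above the implied volatility paid for the options, these repeated rehedges accumulate gains on average, which is the mechanism the paragraph above describes.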
Neural networks are becoming increasingly popular in the statistical arbitrage arena due to their ability to find complex mathematical relationships that seem invisible to the human eye. These networks are mathematical or computational models based on biological neural networks. They consist of a group of interconnected artificial neurons that process information using a connectionist approach to computation — this means that they change their structure based on the external or internal information that flows through the network during the learning phase.
Essentially, neural networks are non-linear statistical data models used to model complex relationships between inputs and outputs and to find patterns in data. In principle, any persistent pattern in securities' price movements can be exploited for profit.
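A tiny forward pass illustrates what "non-linear model of interconnected neurons" means in practice. The architecture and weights here are purely illustrative (untrained), chosen only to show that the output is not proportional to the input as it would be in a linear regression.

```python
import math

def tanh_layer(inputs, weights, biases):
    """One fully connected layer of neurons with tanh activation."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Toy network: two lagged returns in, two hidden neurons, one output.
hidden_w = [[0.5, -0.3], [-0.8, 0.6]]
hidden_b = [0.1, -0.1]
out_w = [0.7, -0.4]
out_b = 0.0

def predict(lagged_returns):
    hidden = tanh_layer(lagged_returns, hidden_w, hidden_b)
    return math.tanh(sum(w * h for w, h in zip(out_w, hidden)) + out_b)

# Non-linearity in action: doubling the input does not double the
# output, unlike a linear model.
a = predict([0.5, -0.5])
b = predict([1.0, -1.0])
print(a, b)
```

During the learning phase mentioned above, the weight matrices would be adjusted (typically by gradient descent) to fit historical data; here they are fixed for clarity.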
High Frequency Trading
High frequency trading (HFT) is a fairly new development that aims to capitalize on the ability of computers to execute transactions quickly. Spending on trading technology has grown significantly over the years and, as a result, there are many programs able to execute thousands of trades per second. Now that most statistical arbitrage opportunities are limited due to competition, the ability to execute trades quickly is the only way to scale profits. Increasingly complex neural networks and statistical models, combined with computers able to crunch numbers and execute trades faster, are the key to future profits for arbitrageurs.
How Statistical Arbitrage Affects Markets
Statistical arbitrage plays a vital role in providing much of the day-to-day liquidity in the markets. It enables large block traders to place their trades without significantly affecting market prices, while also reducing volatility in issues like American depositary receipts (ADRs) by correlating them more closely with their parent stocks.
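The ADR example can be made concrete with a parity check: an ADR's fair value is the foreign listing price converted to dollars times the number of ordinary shares per ADR. The prices, FX rate, and ratio below are hypothetical.

```python
# Hypothetical ADR arbitrage check: each ADR represents 2 ordinary
# shares of a foreign stock quoted in euros.
ordinary_price_eur = 20.00   # foreign listing price per share
eur_usd = 1.10               # dollars per euro
shares_per_adr = 2
adr_market_price = 45.10     # the ADR's price on the U.S. exchange

fair_value = ordinary_price_eur * eur_usd * shares_per_adr  # $44.00
premium = adr_market_price - fair_value                     # $1.10 rich
print(f"fair value: ${fair_value:.2f}, premium: ${premium:.2f}")

# An arbitrageur would short the ADR and buy the ordinary shares,
# pushing the two prices back toward parity (ignoring fees and FX risk).
```

It is exactly this kind of trading pressure that keeps ADRs tightly correlated with their parent stocks.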
However, statistical arbitrage has also caused some major problems. The collapse of Long-Term Capital Management (LTCM) in 1998 almost left the market in ruins. To profit from such small price deviations, it is necessary to take on significant leverage. Moreover, because these trades are automated, they carry built-in safeguards. In LTCM's case, this meant that it would liquidate upon a downward move; the problem was that LTCM's liquidation orders only triggered more sell orders, a vicious feedback loop that ended only when the Federal Reserve organized a bailout by the fund's creditors. Remember, most stock market crashes arise from issues with liquidity and leverage, the very arena in which statistical arbitrageurs operate.
The Bottom Line
Statistical arbitrage is one of the most influential trading strategies ever devised, despite having declined slightly in popularity since the 1990s. Today, most statistical arbitrage is conducted through high frequency trading using a combination of neural networks and statistical models. Not only do these strategies drive liquidity, but they have also been largely responsible for major blowups like that of LTCM. As long as liquidity and leverage remain intertwined, such episodes are likely to recur, making the strategy worth understanding even for the ordinary investor.