All algos are not equal

February 12, 2021

Algorithms have made their way into our daily lives, from optimizing our inboxes to ordering our groceries. They often model and predict our day-to-day tasks, eliminating manual and repetitive actions, and they do the work we assign them more quickly and reliably. Given their ubiquity, it’s no surprise that they’ve found their way into the trading world, where this translates into a streamlined business line that focuses more on profitable strategies and less on the execution of those strategies.

In the repo domain, algorithmic trading automates parts of the process used to execute orders.

Financing desks can upload positions, distribute orders across markets, and execute at the best available prices at any given time. Used well, an algo makes you more competitive with less effort, moving profitability up and overhead down.

However, by handing over control to our software, we introduce significant risks. We’ve all heard stories of what can happen when the combination of people and software goes wrong. Knight Capital’s algorithmic trading strategies, for example, lost $440 million in 45 minutes, decimating the firm’s equity value within days. The effects reached far beyond Knight alone, resulting in extreme market volatility, suspended stocks, and canceled trades at NYSE. The consequences for firms, or even entire ecosystems, can be disastrous.

On an individual level, firms conduct meticulous code reviews to produce solutions deemed fit and safe for their production environments. Testing is manageable within a bank’s ecosystem. But an industry is more than that: it’s an ecosystem of ecosystems. The interdependent network of participants that makes up a market needs to be resilient far beyond the four walls of a single entity. How do you protect it? Can you ask every actor to retest whenever one of them makes a release? You can’t. The approach regulators took with MiFID II was to limit risk by reintroducing the human, and therefore limiting automation.

The MiFID II regulation defines algorithmic trading as “trading in financial instruments where computer algorithms make decisions regarding various parameters of an order such as timing, price, quantity and post trade processing, with no human intervention.”*

Examples we see in the repo space include:

  • Orders placed before markets open and sent automatically when they do.
  • Prices calculated automatically as a varying spread over a reference price (sketched below).
  • Quantities adjusted automatically in response to trading activity.
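
As a minimal sketch of the second case, in Python: a hypothetical pricing function that derives an order price from a reference rate plus a spread that widens with recent volatility. Every name and number here is illustrative, not a real feed or API.

    from dataclasses import dataclass

    @dataclass
    class Quote:
        reference_rate: float  # e.g. a repo benchmark rate, in percent
        volatility: float      # recent rate volatility, as a fraction

    def auto_price(quote: Quote, base_spread: float = 0.02) -> float:
        """Reference rate plus a spread that varies with volatility."""
        spread = base_spread * (1.0 + quote.volatility)
        return quote.reference_rate + spread

    # Each reference-rate tick reprices the order with no human involved,
    # which is precisely the behavior that brings a system into scope.
    print(auto_price(Quote(reference_rate=0.25, volatility=0.5)))  # ~0.28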

Broadly speaking, wherever orders are created or updated without a human action, our systems become subject to regulatory controls**.

So, when the world around us is innovating and becoming ever more algo-driven at such a swift pace, why should trading firms have to pump the brakes? The answer is, they don’t.

Let’s take a closer look at the requirements for trading algos as set out in several articles of MiFID II. Their common theme is control. In production, these algos are subject to:

  • Real-time monitoring.
  • Pre-trade controls.
  • Implementation of a kill switch: the ability to cancel all outstanding orders at once (sketched after this list).
  • Annual stress tests to ensure that software and infrastructure perform well under high volumes of activity.
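
To make the kill-switch requirement concrete, here is a minimal Python sketch built on a hypothetical in-memory order gateway; a real implementation would drive the firm’s actual order-management API rather than a dictionary.

    class OrderGateway:
        """Hypothetical gateway; stands in for a real trading API."""

        def __init__(self):
            self.open_orders = {}  # order_id -> order details
            self.halted = False

        def submit(self, order_id, order):
            if self.halted:
                raise RuntimeError("kill switch engaged; order rejected")
            self.open_orders[order_id] = order

        def cancel(self, order_id):
            self.open_orders.pop(order_id, None)

        def kill_switch(self):
            """Block new submissions, then cancel everything resting."""
            self.halted = True
            for order_id in list(self.open_orders):
                self.cancel(order_id)

The ordering matters: halting submissions before cancelling prevents new orders from slipping in while the book is being cleared.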

To meet these requirements, some firms implement manually managed controls, and some even opt out of the algorithmic race altogether because of them. The additional risks, scrutiny, and overhead are often deemed to partially or totally outweigh the potential benefits of implementing trading algos.

For those who participate and aim to satisfy regulatory requirements, a typical control solution might allow users to monitor thresholds and set absolute limits that are then applied in pre-trade checks. For example, an order is sent only if its price and quantity fall within predetermined ranges. The limitation of these solutions is the static and isolated nature of their approach: yes, we can set safety thresholds based on what we believe is reasonable, but markets by nature are not static, so those thresholds require constant monitoring and adjustment.
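
A minimal sketch of such a static check, assuming hypothetical limits:

    # Illustrative fixed limits; in practice these come from risk sign-off.
    PRICE_RANGE = (0.10, 0.40)           # acceptable rate band, percent
    QTY_RANGE = (1_000_000, 50_000_000)  # acceptable notional band

    def pre_trade_check(price: float, quantity: float) -> bool:
        """Allow the order only if price and quantity sit inside fixed limits."""
        return (PRICE_RANGE[0] <= price <= PRICE_RANGE[1]
                and QTY_RANGE[0] <= quantity <= QTY_RANGE[1])

The weakness is visible in the code itself: the ranges are constants, so when the market moves, someone has to notice and retune them by hand.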

In critical systems, safety controls are put in the hands of the machine: consider self-driving cars or the autopilot on a plane. The safety features must of course be bulletproof, but as a core paradigm, humans should manage controls only by exception. In other words, traders shouldn’t be expected to monitor limit thresholds or adjust pre-trade controls throughout the day. Instead, safety algos could and should implement adaptive measures that react intelligently to market conditions and alert the user when something’s not quite right.
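
One possible shape for such a safety algo, again as an illustrative Python sketch: replace the fixed limits with a band recomputed from recent market prints, and raise an alert only when a check fails. The window length and band multiplier here are assumptions, not a prescription.

    from collections import deque
    from statistics import mean, stdev

    class AdaptiveLimit:
        """Price band derived from recent prints instead of fixed limits."""

        def __init__(self, window=100, k=4.0):
            self.prints = deque(maxlen=window)  # rolling market observations
            self.k = k                          # band width in std deviations

        def observe(self, price):
            self.prints.append(price)

        def check(self, price):
            if len(self.prints) < 2:
                return True, None  # too little data; escalate in real life
            m, s = mean(self.prints), stdev(self.prints)
            low, high = m - self.k * s, m + self.k * s
            if low <= price <= high:
                return True, None
            # Manage by exception: the trader hears about it only on a breach.
            return False, f"price {price} outside adaptive band ({low}, {high})"

This is the manage-by-exception model in miniature: the band tracks the market on its own, and a human is pulled in only when something breaches it.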

In contrast to trading algos, the purpose of safety algos is to minimize each firm’s exposure to risk. In fact, they can augment and enhance protections while meeting the goals of automation: removing human error and the need for manual monitoring frees up time for businesses to focus on more strategic endeavors. In this way, firms can continue to innovate and move forward without fear. Those who abandon aspirations of automation altogether miss out on the competitive advantages these systems bring. That’s a mistake.

Rather than wage war on algos or retreat from them, we should double down and embrace them. They’re not good or bad. They’re opportunities that constantly need to be assessed to ensure they remain under our control.

Learn more about our solutions

Our clients use a tailored combination of ION standard products and services that cover real-time trading checks, automated testing, annual stress-testing and reports, and smart safety monitoring.