Effective system and algorithmic testing: Why using real-world data really should be a prerequisite

26th Jun 2016

There will always be edge cases that weren’t considered when a system was originally tested. You probably know the ones I mean: the little nuances you don’t see every day. They can emerge, for instance, in big IPOs or on triple witching days, when the stars align just so to create a unique set of circumstances that nobody ever thought to write a test case for. These are the scenarios that really put algos and feed handlers through their paces, and it’s here that having tested systems with real-world rather than synthetic data makes a huge difference.

Having worked with a number of clients to improve the effectiveness of their testing environments, I’ve learnt a few things along the way, and I have to say, the value of employing genuine, real-world data, in terms of both content and rate, is hard to overstate.

When firms use synthetic data, or take part in weekend or overnight testing with the exchange (still semi-synthetic, as it’s not from a live feed), they never seem to get as close to reality as real-world data allows. One reason is that synthetic data doesn’t usually cover the entire specification of the market data protocol, and even when it does, it hardly ever covers every possible combination of messages or the different sequences of events that could occur. Take testing a gateway with synthetic data, for example: you wouldn’t necessarily be exercising it with every message that could legitimately arrive on a particular protocol.
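To make that coverage gap concrete, here is a minimal sketch of one way to measure it, assuming a hypothetical set of message types and a test feed represented as (type, payload) pairs; none of the names come from a real exchange protocol:

```python
# A minimal sketch of measuring how much of a market data protocol a test
# data set actually exercises. The message type names and the "spec" set
# are illustrative placeholders, not any particular exchange's protocol.
from collections import Counter
from itertools import pairwise  # Python 3.10+

# Full set of message types the protocol specification defines (assumed).
SPEC_MESSAGE_TYPES = {
    "ADD_ORDER", "MODIFY_ORDER", "DELETE_ORDER",
    "TRADE", "TRADE_BREAK", "AUCTION_UPDATE",
    "TRADING_STATUS", "IMBALANCE",
}

def coverage_report(messages):
    """messages: sequence of (msg_type, payload) tuples from a test feed."""
    types = [msg_type for msg_type, _ in messages]
    seen = Counter(types)
    # Adjacent-pair coverage: which type-to-type sequences ever occur.
    transitions = Counter(pairwise(types))
    return {
        "types_covered": f"{len(seen)}/{len(SPEC_MESSAGE_TYPES)}",
        "never_seen": sorted(SPEC_MESSAGE_TYPES - seen.keys()),
        "distinct_transitions": len(transitions),
    }

# A typical synthetic feed exercises a handful of "happy path" types over
# and over, so trade breaks, imbalances and most transitions never appear.
synthetic_feed = [("ADD_ORDER", {}), ("TRADE", {}), ("DELETE_ORDER", {})] * 1000
print(coverage_report(synthetic_feed))
```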

The problem, then, is that if you can’t get as close to reality as you need to, there will always be edge cases that throw your systems off because you haven’t tested for them, and honestly, these are the ones likely to cause the next big market-rattling event.

 

Real-world data and its variances

The thing with real data is that it gives you so much variation in volume, rate and the sequence in which events take place. Because testing isn’t only about functional correctness, this degree of diversity really matters.

 

Testing for volume volatility

I’ve seen many cases where people think testing is all about data volumes, but that’s only part of the story. It’s fine to ask, ‘the rate is now 5k messages a second, how are we doing? Now it’s 10k, still good? Now it’s 15k, how are we holding up?’ However, there isn’t a market in the world that has a stable rate, so how effective can this approach really be?

Any live feed on any system is likely to be very, very bursty, and this type of load testing doesn’t always account for that. Being able to test volume volatility makes a real difference: only then are you testing how stable your systems will prove to be when faced with those swings in production.
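As a rough illustration of the difference, the sketch below contrasts a flat-rate load test with one that delivers the same average rate in bursts. The send_batch hook and the burst shape are assumptions for illustration, not any particular vendor’s replay API:

```python
# Minimal sketch of constant-rate vs. bursty load, assuming a hypothetical
# send_batch(n) hook into the system under test. The burst pattern is
# illustrative only; real feeds are far less regular than this.
import random
import time

def send_batch(n):
    """Placeholder: push n messages into the system under test."""
    pass

def constant_rate(msgs_per_sec, duration_sec):
    """Classic load test: the same number of messages every 100 ms slice."""
    per_slice = msgs_per_sec // 10
    for _ in range(duration_sec * 10):
        send_batch(per_slice)
        time.sleep(0.1)

def bursty_rate(avg_msgs_per_sec, duration_sec, burst_factor=20):
    """Same average rate, but delivered as short spikes separated by lulls.
    Each 100 ms slice has a 1-in-burst_factor chance of carrying a spike."""
    per_slice = avg_msgs_per_sec // 10
    for _ in range(duration_sec * 10):
        if random.random() < 1.0 / burst_factor:
            send_batch(per_slice * burst_factor)   # spike: queues, buffers, GC
        else:
            send_batch(0)                          # lull: idle paths, timers
        time.sleep(0.1)
```

Both functions push the same number of messages overall; it is the spikes and lulls in the second one that tend to expose queueing, buffering and back-pressure problems a flat rate never touches.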

 

Testing for sequences of events and reproducing the exact rate of events

A second thing to consider is the sequence of events and the rates at which they occur. Take, for instance, an algo that has been enhanced after it reacted unexpectedly to a market event. In this scenario you ideally want to test the improvement using the same data that caused the original issue. Whilst synthetic data may allow you to test the improvement with content that seeks to be representative, some firms struggle to replicate the precise sequence of market data events at the original rate at which they played out. Doing so could involve interweaving several other feeds into the timeline at exactly the moments they arrived in production, and reproducing inter-packet gaps and other idiosyncrasies that could have influenced the system’s behaviour.

Unless you can run test scenarios in a very controlled way, simulating the exact rate of the data and the sequence of events, how do you know whether an improvement has actually worked? How can you guarantee that under the same circumstances the problem won’t recur, and that you haven’t inadvertently broken something else in the process?

It’s only by precisely recreating the volume, rate and sequence of events, and testing enhancements against them, that engineers can be sure the improvement has been a success.
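One way to picture “same data, same rate” is a replay loop that honours the recorded inter-packet gaps rather than pushing messages out as fast as the test harness can manage. This is only a sketch: the (timestamp_ns, payload) capture format and the publish() hook are hypothetical stand-ins for whatever recording and injection mechanism a firm actually uses:

```python
# Minimal sketch: replay a recorded feed while preserving the original
# inter-packet gaps, so both the sequence and the rate match the incident.
import time

def publish(payload):
    """Placeholder: inject one recorded packet into the test environment."""
    pass

def replay(capture, speed=1.0):
    """Replay (timestamp_ns, payload) records, scaling inter-packet gaps by 1/speed."""
    if not capture:
        return
    start_wall = time.perf_counter_ns()
    start_capture = capture[0][0]
    for ts_ns, payload in capture:
        # The wall-clock moment this packet is due, relative to replay start.
        due = start_wall + (ts_ns - start_capture) / speed
        # Sleep coarsely while far away, then busy-wait near the deadline,
        # because a plain sleep rounds small gaps up to the scheduler tick.
        while time.perf_counter_ns() < due:
            remaining = due - time.perf_counter_ns()
            if remaining > 2_000_000:              # more than ~2 ms to go
                time.sleep((remaining - 1_000_000) / 1e9)
        publish(payload)

# speed=1.0 reproduces the incident timeline exactly; higher speeds compress
# it, which helps soak testing but can hide timing-sensitive behaviour.
```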

 

The expense of generating versatile synthetic data

Being able to replicate such scenarios using the real data that actually occurred at that particular point in time can save huge amounts of money and effort. Let me be clear here: you can get synthetic data to be pretty good. If you wanted, you could have your team learn all of the protocols and make sure the data is reproduced as accurately as possible, but this approach very quickly becomes resource intensive and expensive.

A further point to consider is that, almost by default, teams using real-world test data cover more of the scenarios a trading system might go through than even the most imaginative person could create synthetically. Whilst this doesn’t guarantee that every possible case will be tested, it’s fair to say that performance will be evaluated against a much broader array of relevant and likely scenarios than with synthetic alternatives alone.

 

Testing how the whole chain of events works together

Another angle to consider is how all the components work together. Trading systems operate like orchestras: every instrument can sound perfectly in tune on its own, but if they are not playing in harmony the result can sound horrible, and the desired outcome just won’t be achieved.

As most feed handlers and smart order routers don’t look at just one market, testing on a single market has its limits. Effectively testing these systems really requires recreating the whole thing: an entire region, or even global coverage of a specific asset class’s market data. Take, for instance, a system designed to break orders down according to market conditions: to test it effectively you need real-world market data that can throw the various conditions at it.
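As a loose sketch of what “throwing the various conditions at it” might look like in practice, the snippet below labels one-second windows of recorded data by market condition, so a test plan can check that each condition is actually covered. The thresholds and labels are invented for illustration, not a standard classification:

```python
# Minimal sketch: label windows of recorded market data by condition so a
# test plan can confirm each condition is exercised. Thresholds and labels
# are illustrative assumptions only.
from collections import Counter

def classify_window(msg_count, traded_volume, halted, window_sec=1.0):
    """Very rough condition label for one window of recorded market data."""
    if halted:
        return "halted"
    rate = msg_count / window_sec
    if rate > 50_000:
        return "fast_market"
    if traded_volume == 0 and rate < 100:
        return "quiet"
    return "normal"

def condition_coverage(windows):
    """windows: iterable of (msg_count, traded_volume, halted) per second.
    Returns how many seconds of each condition the recording provides."""
    return Counter(classify_window(*w) for w in windows)

# A test plan might then require, say, a minimum number of "fast_market"
# seconds and at least one "halted" window before an order-slicing change
# is signed off against the replayed data.
```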

Firms with an arsenal of real-world data, enabling them to recreate these markets and test these systems, can really understand how all of the orchestra’s musicians will play together when the unexpected happens.

 

Effective system and algorithmic testing for confidence

For all of these reasons, I feel that employing real-world data, along with the proper tools to accurately replay it into test environments at its original rate, is the only way firms can really prepare for the edge cases and understand how their systems are likely to react to a crazy day in the market. You can get synthetic data to be really good, but in my opinion nothing replaces the nuances and subtleties that real-world data presents.
