Financial systems are drowning in data. A multitude of complex and legacy systems, regulatory requirements, the proliferation of electronic trading – all these factors conspire to generate vast data lakes that many firms struggle to manage, let alone gain meaningful insight from.
One of the major problems is a lack of connectedness in the data – financial institutions simply don’t have the right level of data visibility – and that is down to a number of factors.
Firstly, firms often have issues capturing data in the correct way – it might be loosely or incorrectly time-stamped, or the very act of capturing it might interfere with the systems being measured and compromise accuracy. Secondly, data storage often presents issues – when stored as simple, isolated events, data can appear to degrade, becoming increasingly opaque as business processes change over time and giving firms a skewed, or at worst impenetrable, version of reality.
This is particularly acute when trying to establish cause and effect – financial systems are, by nature, a complex web of infrastructure and applications, linked in turn to external systems. That makes it difficult to establish clear and accurate inter-relationships and correlations between what should be related data sets.
So signal integrity – based on “true” data – should be paramount when it comes to drawing accurate conclusions about the business. But what is signal? And, come to that, what is integrity?
Based on our experience working with numerous investment banks, exchanges and brokers, we know that signal means different things to different people, depending on the business and the position they hold within it. Operational support staff will have a very different interpretation of signal from a trader, for example: the former are focussed on “is the system working?”, whilst the latter’s priority is “is the business process working, and are we making money?”
But establishing meaningful signal from the wall of data noise that engulfs every financial firm is important to everyone. Take this scenario: as a trader, outliers might be killing your profitability. While the vast majority of your market data ticks are ticking along fine, a proportion – let’s say 5% – are taking an unusually long time to process. Identifying the outliers in the tick stream and investigating their common features is the first step towards solving those processing issues, which will – in turn – reduce loss-making trades.
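As a rough illustration of that first step, the sketch below (plain Python; the field names recv_ts, done_ts, symbol and feed are illustrative assumptions, not a reference to any particular capture format) flags the slowest ~5% of ticks by processing latency and tallies what those outliers have in common.

```python
# Minimal sketch: flag the slowest ~5% of ticks by processing latency,
# then look for features the outliers share. Field names are hypothetical.
from collections import Counter
from math import ceil

def find_slow_ticks(ticks, pct=0.95):
    """Return ticks whose processing latency falls above the given percentile."""
    latencies = sorted(t["done_ts"] - t["recv_ts"] for t in ticks)
    cutoff = latencies[ceil(pct * (len(latencies) - 1))]
    return [t for t in ticks if t["done_ts"] - t["recv_ts"] >= cutoff]

def common_features(slow_ticks, keys=("symbol", "feed")):
    """Tally shared attributes among the outliers as a first clue to the root cause."""
    return {k: Counter(t[k] for t in slow_ticks).most_common(3) for k in keys}

ticks = [
    {"symbol": "VOD.L",  "feed": "A", "recv_ts": 0.000, "done_ts": 0.0004},
    {"symbol": "VOD.L",  "feed": "B", "recv_ts": 0.001, "done_ts": 0.0095},
    {"symbol": "BARC.L", "feed": "A", "recv_ts": 0.002, "done_ts": 0.0024},
]

slow = find_slow_ticks(ticks)
print(common_features(slow))  # e.g. does one symbol or feed dominate the slow tail?
```

In practice the grouping keys would be whatever attributes your capture records carry – venue, message type, time of day – but the principle is the same: isolate the slow tail, then ask what it has in common.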
Another important signal might be identifying the process hops that are adding latency to a trading engine. Establishing where the bottlenecks are occurring is the first step to removing them and reducing latency. Now you might think latency is purely a systems issue, but when you consider that latency impedes trading velocity, it is very much a business issue – trading speed has a direct impact on the bottom line.
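As a minimal sketch of that kind of bottleneck hunting, suppose each order carries a trail of timestamps stamped at every hop; the hop names below (gateway, risk_check, order_router, exchange_ack) are purely illustrative. Comparing the median per-hop latency across many orders points to where the time is actually going.

```python
# Minimal sketch: per-hop latency from a trail of timestamps on each order,
# aggregated to show which hop dominates. Hop names are hypothetical.
from statistics import median

HOPS = ["gateway", "risk_check", "order_router", "exchange_ack"]

def hop_latencies(order):
    """Per-hop delta in microseconds for a single order's timestamp trail."""
    ts = order["timestamps"]  # {hop_name: epoch_seconds}
    return {f"{a}->{b}": (ts[b] - ts[a]) * 1e6 for a, b in zip(HOPS, HOPS[1:])}

def bottlenecks(orders):
    """Median latency per hop across many orders, slowest hop first."""
    per_hop = {}
    for order in orders:
        for hop, us in hop_latencies(order).items():
            per_hop.setdefault(hop, []).append(us)
    return sorted(((hop, median(v)) for hop, v in per_hop.items()),
                  key=lambda kv: kv[1], reverse=True)

orders = [
    {"timestamps": {"gateway": 0.000000, "risk_check": 0.000045,
                    "order_router": 0.000230, "exchange_ack": 0.000260}},
    {"timestamps": {"gateway": 1.000000, "risk_check": 1.000050,
                    "order_router": 1.000400, "exchange_ack": 1.000430}},
]

for hop, us in bottlenecks(orders):
    print(f"{hop}: {us:.0f}us median")  # risk_check->order_router dominates here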
So if signal is a red flag that highlights real problems within the business, what ensures the integrity of that signal? Again, it’s down to multiple factors, and all of them are relative to different people within the business depending on the role they perform. It might be accurate time stamping, or the ability to capture and analyse data in real time. It might be real-time cause-and-effect analysis – tick to trade – the best measure of trading efficacy. Or it could be real-time slice-and-dice analysis by client, trader, component or exchange, giving you deep insight into patterns within the data stream.
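To make the slice-and-dice idea concrete, here is a minimal sketch (again with illustrative field names rather than any specific feed format) that measures tick-to-trade latency, from market-data tick timestamp to order-sent timestamp, and averages it by whichever dimension you choose.

```python
# Minimal sketch: tick-to-trade latency broken down by a chosen dimension
# (client, trader, component or exchange). Record fields are hypothetical.
from statistics import mean

def tick_to_trade_by(records, dimension):
    """Mean tick-to-trade latency (microseconds) grouped by the chosen dimension."""
    buckets = {}
    for r in records:
        latency_us = (r["order_sent_ts"] - r["tick_ts"]) * 1e6
        buckets.setdefault(r[dimension], []).append(latency_us)
    return {key: mean(vals) for key, vals in buckets.items()}

records = [
    {"client": "ACME", "exchange": "LSE",  "tick_ts": 0.0000, "order_sent_ts": 0.00012},
    {"client": "ACME", "exchange": "XETR", "tick_ts": 0.0000, "order_sent_ts": 0.00090},
    {"client": "BETA", "exchange": "LSE",  "tick_ts": 0.0000, "order_sent_ts": 0.00015},
]

print(tick_to_trade_by(records, "client"))    # is one client's flow consistently slower?
print(tick_to_trade_by(records, "exchange"))  # or does one venue drive the latency?
```

The same breakdown run by trader, component or venue is what turns a single headline latency number into a pattern you can actually act on.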
As a financial firm, you are so deluged with data that it’s important to be able to separate the signal from all the noise, and to do so with pinpoint precision and lightning speed. In a post-MiFID II landscape, firms are moving on from tick-box regulation to understanding what they can derive from these enormous data lakes and the conclusions they can draw – using data to improve the bottom line and maintain a competitive edge. And in the dog-eat-dog world of trading, you need all the edge you can get.