Assessing FX pricing data to enhance trade profitability levels, client experience and capacity planning

8th Jun 2020

A lot of FX market data is delivered over TCP streams, allowing FX venues to target their pricing. Because TCP is ‘reliable’ (in contrast to UDP, which delivers the vast majority of equities market data) it’s relatively easy to determine from the network whether you’ve got a channel-level problem – the quality information is embedded within the TCP session. A slow consumer causes the receive window to collapse, and sequence gaps trigger TCP-level retransmits. These things may be invisible to your application, but they can cause a backlog of prices. Alternatively, if the errors are bad enough, the TCP session will drop and the application will have to reconnect.

Alerting on these kinds of issues is exactly the insight that monitoring FX market data at a channel level will deliver. Channel-level monitoring is all about understanding the health and stability of your networks and your connections to the outside world. If the network looks unhealthy from a technical perspective you can investigate internally, call up the liquidity provider to see if they have an issue, or try an alternative path to the venue. So there are a series of network-level actions you can take if you start to see channel issues.
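To make the idea concrete, here is a minimal sketch of a feed-level watchdog that flags a channel as unhealthy when it goes quiet. This is an application-side approximation only – real channel monitoring would also inspect TCP-level signals such as window size and retransmits – and all names and thresholds are illustrative, not Beeks Analytics’ implementation:

```python
import time

class ChannelWatchdog:
    """Flags a feed channel as unhealthy if no data arrives within a timeout.

    Illustrative sketch: the channel names, the 5-second staleness threshold
    and the injectable clock are all assumptions for demonstration.
    """

    def __init__(self, stale_after_s=5.0, clock=time.monotonic):
        self.stale_after_s = stale_after_s
        self.clock = clock
        self.last_seen = {}  # channel name -> timestamp of last bytes received

    def on_data(self, channel):
        # Called whenever bytes arrive on a channel.
        self.last_seen[channel] = self.clock()

    def stale_channels(self):
        # Channels that have gone quiet for longer than the threshold.
        now = self.clock()
        return sorted(c for c, t in self.last_seen.items()
                      if now - t > self.stale_after_s)
```

Injecting the clock makes the watchdog easy to exercise in tests; in production it would simply default to the monotonic clock.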

When we’re looking at the content level, assuming the channel is healthy (it’s up and you’re communicating effectively), what we want to know is: are we actually getting all of the prices the market is supposed to be delivering to us? At its simplest, you may have asked the market (through a start-of-day request or pre-arranged configuration) to deliver prices for certain currency pairs at a number of tiers. Content-level assessment will tell you whether the market is delivering all of those requested prices.
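At its simplest, that completeness check is a set difference between what was requested and what has been observed on the wire. A minimal sketch, assuming streams are identified by (currency pair, tier) – the representation is an assumption for illustration:

```python
def missing_subscriptions(requested, delivering):
    """Content-level completeness check: which requested (pair, tier)
    streams has the venue not delivered any prices on?

    `requested` is the set of streams asked for at start of day (or agreed
    in advance); `delivering` is the set actually seen on the wire.
    Illustrative sketch only.
    """
    return sorted(set(requested) - set(delivering))
```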

At Beeks Analytics (formerly Velocimetrics) we do this by detecting the presence of the individual pricing streams and assessing both the timing of the ticks within those streams and the price movements taking place. So we’re looking at whether each currency pair is ticking at the rate you’d expect and whether price changes are within normal bands.

To do this we use probabilistic detectors to build up, over time, a per-instrument profile of the tick rate. We compare this to current behaviour to determine if, for instance, the interval between ticks is usual or unusual. Instead of having absolute thresholds, we ask the detectors to raise an alert if something happens that, according to the symbol’s profile, has an x% chance of being anomalous; this means the alert threshold varies throughout the day, and does so independently for each currency pair. Price movements can also be assessed using probabilistic detectors, looking at the movement of each symbol’s buy and sell price independently compared to that side’s historical profile.
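A heavily simplified sketch of the inter-tick idea: keep a rolling profile of recent intervals per symbol and alert when a new interval falls beyond an empirical quantile of that profile. The real detectors are probabilistic and time-of-day aware; the window size, quantile and warm-up count here are illustrative assumptions, not the production algorithm:

```python
from collections import defaultdict, deque

class TickIntervalDetector:
    """Per-symbol anomaly check on inter-tick intervals.

    Alerts when an interval exceeds an empirical quantile of the symbol's
    own recent profile, so the effective threshold differs per symbol and
    drifts with behaviour. All parameters are illustrative.
    """

    def __init__(self, window=500, quantile=0.99, min_samples=50):
        self.quantile = quantile
        self.min_samples = min_samples
        self.last_tick = {}   # symbol -> timestamp of previous tick
        self.intervals = defaultdict(lambda: deque(maxlen=window))

    def on_tick(self, symbol, ts):
        """Record a tick; return True if its interval looks anomalous."""
        anomalous = False
        if symbol in self.last_tick:
            interval = ts - self.last_tick[symbol]
            profile = self.intervals[symbol]
            if len(profile) >= self.min_samples:
                # Empirical quantile cut-off from the symbol's own history.
                cutoff = sorted(profile)[int(self.quantile * (len(profile) - 1))]
                anomalous = interval > cutoff
            profile.append(interval)
        self.last_tick[symbol] = ts
        return anomalous
```

Because each symbol carries its own profile, a gap that is routine for an illiquid cross can still be flagged as anomalous for EUR/USD.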

Being notified when things move outside of the norm can provide FX brokers with insight that can be used to improve their trade profitability. Take, for example, the pricing data being received from liquidity providers.

If you’re consuming prices from multiple sources, it’s important to know that you’re receiving a timely and complete set of prices from every provider.  Consider a situation where you’re unknowingly building a picture based on only three out of four liquidity providers actually delivering the information you believe is contributing to your pricing decisions.  If the missing fourth stream would have made even a fraction of a difference, you could find yourself publishing suboptimal quotes.  By assessing the data at a content level, you’d pick up straight away on the fact that you’re missing out on an entire stream of liquidity that could be helping you.
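The pricing impact of a silently missing stream is easy to see with a toy top-of-book aggregation. This is a deliberately minimal sketch – provider names and prices are invented, and real aggregation logic is far richer:

```python
def best_bid_ask(quotes):
    """Aggregate top-of-book across liquidity providers.

    `quotes` maps provider name -> (bid, ask). Illustrative sketch showing
    how losing one provider's stream shifts the aggregated price without
    any obvious error being raised.
    """
    if not quotes:
        raise ValueError("no provider quotes available")
    best_bid = max(bid for bid, _ in quotes.values())
    best_ask = min(ask for _, ask in quotes.values())
    return best_bid, best_ask
```

If the provider contributing the best bid silently stops delivering, the aggregate simply degrades to the next-best price – exactly the kind of quiet suboptimality content-level assessment is meant to catch.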

Also, by assessing the data content, you could compare tick rates and price movements between liquidity providers; on the whole, you’d expect your providers to be delivering similar prices, at a similar rate.  This analysis could reveal that a certain provider is having issues, or that they’re conflating prices before they send them to you.
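A cross-provider comparison can be sketched as a simple outlier check against the peer median. The 50% ratio is an illustrative threshold, not a recommendation:

```python
from statistics import median

def laggard_providers(ticks_per_minute, ratio=0.5):
    """Flag providers whose tick rate is well below their peers'.

    A rate far under the cross-provider median can indicate an upstream
    issue, or that the provider is conflating prices before sending them.
    `ticks_per_minute` maps provider name -> observed rate; the 0.5 ratio
    is an illustrative assumption.
    """
    mid = median(ticks_per_minute.values())
    return sorted(p for p, rate in ticks_per_minute.items()
                  if rate < ratio * mid)
```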

Armed with this insight, you might start by looking at your own configuration: checking, for instance, that the subscriptions set up at the beginning of the day are as they should be, so you’re actually asking for what you believe you should be receiving. If this all checks out, you might then contact your liquidity provider to find out what’s going on and whether, for instance, you need to move onto a newer gateway or an alternative network to avoid receiving conflated prices.

On the client side, assessing the data being sent to and received from clients can also provide valuable insight. You understandably want your clients to trade with you rather than anyone else, so by generating metrics such as quote-request-to-response latency you can continuously determine whether client SLAs are being met. You could also monitor tick-to-quote latency to determine whether your pricing out to clients is as close to instantaneous as possible, based on the prices being received, and identify opportunities for internal improvement.
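The quote-request-to-response metric amounts to matching responses to requests by ID and comparing the elapsed time to the agreed SLA. A minimal sketch, with field names and the SLA figure as assumptions:

```python
class SlaMonitor:
    """Quote-request-to-response latency against a per-client SLA.

    Matches each response to its request by request ID and records any
    breach of the SLA. Illustrative sketch; timestamps are in ms and the
    SLA figure is an assumption.
    """

    def __init__(self, sla_ms):
        self.sla_ms = sla_ms
        self.pending = {}   # request_id -> (client, request timestamp ms)
        self.breaches = []  # (client, request_id, latency_ms)

    def on_request(self, client, request_id, ts_ms):
        self.pending[request_id] = (client, ts_ms)

    def on_response(self, request_id, ts_ms):
        client, start = self.pending.pop(request_id)
        latency = ts_ms - start
        if latency > self.sla_ms:
            self.breaches.append((client, request_id, latency))
        return latency
```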

If a firm feeds information about its own performance back into its pricing system and detects that it is pricing slowly and failing to keep pace with a moving market, it can act quickly to avoid the risk of being picked off. An obvious response would be to dynamically widen spreads to mitigate the risk of offering loss-making trades.
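One way to express that response is a spread multiplier driven by measured pricing latency. Every parameter below – the target latency, the widening rate and the cap – is an illustrative assumption, not a trading recommendation:

```python
def adjusted_spread(base_spread, pricing_latency_ms, target_ms=10.0,
                    widen_per_ms=0.02, max_multiplier=3.0):
    """Widen the quoted spread when pricing is lagging the market.

    If measured tick-to-quote latency exceeds the target, scale the spread
    up (capped) to limit the risk of being picked off on stale quotes.
    All parameters are illustrative assumptions.
    """
    excess = max(0.0, pricing_latency_ms - target_ms)
    multiplier = min(max_multiplier, 1.0 + widen_per_ms * excess)
    return base_spread * multiplier
```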

Alternatively, if it’s detected that the tick rates from a particular venue are going through the roof, and this is adversely impacting the speed of your pricing systems, you could choose to disable connections to that provider to reduce the load on internal systems, thereby managing the operational risk that volume spikes can cause.  Longer term, monitoring these trends is also useful for infrastructure planning, enabling firms to determine when they need more pricing hardware or increased client gateway capacity.
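That load-shedding response looks like a per-venue circuit breaker: count ticks in a sliding window and stop consuming a feed once the rate crosses a threshold. A minimal sketch under assumed thresholds:

```python
from collections import deque

class VenueCircuitBreaker:
    """Disable a venue's feed when its tick rate spikes past a threshold.

    Counts ticks in a sliding one-second window per venue. Illustrative
    sketch of the load-shedding response described above; the threshold
    is an assumption, and real systems would also re-enable feeds.
    """

    def __init__(self, max_ticks_per_s=1000):
        self.max_ticks_per_s = max_ticks_per_s
        self.windows = {}    # venue -> deque of recent tick timestamps
        self.disabled = set()

    def on_tick(self, venue, ts):
        """Record a tick; return False once the venue has been disabled."""
        w = self.windows.setdefault(venue, deque())
        w.append(ts)
        # Drop timestamps that have fallen out of the one-second window.
        while w and w[0] <= ts - 1.0:
            w.popleft()
        if len(w) > self.max_ticks_per_s:
            self.disabled.add(venue)  # stop consuming this feed
        return venue not in self.disabled
```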

Furthermore, some firms are keen to use data content assessment to monitor how long it typically takes particular clients to action quotes.  They can detect if certain clients are always trading towards the end of the quote’s lifespan, which might be an indication that they are receiving data from other venues that they’re trying to arbitrage against.
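Detecting that pattern is a matter of comparing each fill’s age against the quote’s lifespan and flagging clients whose fills cluster at the end of it. A sketch under assumed thresholds – the TTL fraction, minimum sample size and share are all illustrative:

```python
from collections import defaultdict

def late_trading_clients(fills, quote_ttl_ms, late_fraction=0.9,
                         min_fills=20, late_share=0.8):
    """Spot clients that consistently trade at the end of a quote's life.

    `fills` is a list of (client, ms_after_quote_publish) records. A client
    whose fills mostly land in the last 10% of the quote TTL may be
    arbitraging against faster data from another venue. All thresholds
    are illustrative assumptions.
    """
    per_client = defaultdict(list)
    for client, age_ms in fills:
        per_client[client].append(age_ms)
    cutoff = late_fraction * quote_ttl_ms
    flagged = []
    for client, ages in per_client.items():
        if len(ages) < min_fills:
            continue  # too few fills to judge behaviour
        late = sum(1 for a in ages if a >= cutoff)
        if late / len(ages) >= late_share:
            flagged.append(client)
    return sorted(flagged)
```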

This richness of insight just isn’t possible if a firm is only assessing data at a channel level.  Doing so means a firm can miss out on valuable information that would otherwise enable them to:

  • Stop the continuous publication of poorly optimised quotes in its tracks
  • Understand how FX trading practices can be continuously improved
  • Gain a stronger understanding of client behaviours and address those that could be potentially harmful to the broker
  • Identify opportunities to improve client experience

All of which can help FX brokers to optimise their offering and boost trading profitability in today’s increasingly competitive market.
