
MiFID II: Clock synchronisation and meeting the divergence challenge

8th Dec 2015

MiFID II will introduce the requirement for trading venues, their members and participants to synchronise the business clocks used to record the date and time of reportable events to UTC (Coordinated Universal Time). According to the latest regulatory technical standards, published in September, for this information to be valuable, these parties will need to adhere to a maximum divergence from UTC. The degree of maximum divergence varies for trading venues based on their trading systems’ gateway-to-gateway latency time, and for members and participants based on the type of trading activities being performed.

Standardisation has delivered significant benefits to many parts of our industry and, as per my recent blog, I personally think that MiFID II’s clock synchronisation rules could present significant additional advantages beyond the brief satisfaction of ticking a compliance box. However, that’s not to say I think meeting the new obligations will prove particularly easy.

I’ve worked with enough firms keen to achieve very precise timestamps for performance measurement purposes to understand just how difficult aligning timed events to a common time source can be. For these firms the good news is that, once this has been achieved, they can generate significantly more accurate timings when reconstructing trades and generating reports and performance statistics. This can provide considerable insight into how they can further enhance and oversee their trading processes.

The problem in getting to this degree of accuracy lies in the fact that there are so many stages where divergence can start to creep in; if the MiFID II mandates are to be met, everything is going to have to be very accurately synced up, especially if a trading venue’s members or participants operating high-frequency algorithmic trading techniques are to achieve a maximum divergence from UTC of 100 microseconds.
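
To make that figure a little more concrete, here is a minimal sketch in Java (the class and variable names are my own, purely for illustration) of the kind of check a firm might run against a measured clock offset to see whether it breaches the 100 microsecond limit.

```java
import java.time.Duration;

/**
 * Illustrative only: checks a measured clock offset against the
 * 100-microsecond divergence limit that applies to members and
 * participants using high-frequency algorithmic trading techniques.
 */
public class DivergenceCheck {

    // Maximum permitted divergence from UTC for HFT participants.
    private static final Duration MAX_DIVERGENCE = Duration.ofNanos(100_000);

    /** Returns true if the measured offset breaches the limit. */
    static boolean breachesLimit(Duration measuredOffsetFromUtc) {
        return measuredOffsetFromUtc.abs().compareTo(MAX_DIVERGENCE) > 0;
    }

    public static void main(String[] args) {
        // Hypothetical measurement: business clock found to be 150 microseconds ahead of UTC.
        Duration offset = Duration.ofNanos(150_000);
        System.out.println("Breaches 100 microsecond limit? " + breachesLimit(offset));
    }
}
```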

 

Clock source options

If we take it from the top, according to the latest regulatory technical standards, published in September, firms will be required to synchronise their business clocks to UTC “issued and maintained by the timing centres listed in the latest Bureau International des Poids et Mesures Annual Report on Time Activities”, or “with UTC disseminated by a satellite system, provided that any offset from UTC is accounted for and removed from the timestamp.”

So you can source your time either from your country’s official timing centre, such as the UK’s National Physical Laboratory (NPL), or via GPS (the global positioning system, which uses satellites and ground stations to provide a very accurate time source). Although the jury is out over whether GPS is what is meant by a satellite system, or some other future, and as yet undefined, satellite system!

 

Evaluating the accuracy of your clock source

An initial point I want to raise is that UTC is a concept rather than a physical constant; it’s the weighted average time of the atomic clocks of the participating countries. These timing centres each run a series of clocks, and each clock has a weighted input into the overall international UTC, with clocks based in certain countries granted a higher weighting than others. UTC is also adjusted, through leap seconds, to stay in step with the Earth’s rotation, which introduces further variation.

As a representative from one country’s national timing centre stated at a recent event held in London, the atomic clocks run by these national centres can vary by plus or minus 20 nanoseconds from UTC over the course of a year. So if a firm is looking at a particular reference clock, it’s worth noting that even those clocks may be mildly offset from the theoretical standard of UTC.

Should a firm want to examine the satellite option, they will need to bear in mind that whilst GPS offers an atomic-clock-based time that’s incredibly close to UTC, it isn’t exactly UTC. A GPS receiver’s clock will look at multiple satellites and calculate the average. Therefore, it’s not surprising that the new regulation will require the offset from UTC to be accounted for.

Once the source is determined, the next point for consideration is how the firm gets the time into their own physical clocks.

 

Grandmaster: Transportation and recovery

Most large banks will operate their own official clock, called a ‘Grandmaster’. The way in which time is transported to the Grandmaster needs to be taken into account; this could be, for instance, over fibre or via a GPS antenna located on the building’s roof. If it’s the latter, the firm will need to adjust their clocks to accommodate, for example, the length of the cable between the antenna and the Grandmaster. When you consider that a signal can take a nanosecond to travel through just 20-30 centimetres of cable, this can add up.
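
To give a feel for the numbers involved, here is a rough back-of-the-envelope sketch in Java that estimates the delay introduced by an antenna cable run. The velocity factor is an assumed, typical figure; the actual value comes from the cable’s datasheet.

```java
/**
 * Back-of-the-envelope estimate of the delay an antenna cable adds.
 * The velocity factor (fraction of the speed of light at which the
 * signal propagates in the cable) is an assumption; check the datasheet.
 */
public class CableDelay {

    private static final double SPEED_OF_LIGHT_M_PER_S = 299_792_458.0;

    /** Delay in nanoseconds for a cable of the given length and velocity factor. */
    static double delayNanos(double cableLengthMetres, double velocityFactor) {
        double metresPerNanosecond = SPEED_OF_LIGHT_M_PER_S * velocityFactor / 1e9;
        return cableLengthMetres / metresPerNanosecond;
    }

    public static void main(String[] args) {
        // Assumed 30 m run from a rooftop antenna, velocity factor ~0.66.
        System.out.printf("~%.0f ns of delay to compensate for%n", delayNanos(30.0, 0.66));
    }
}
```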

The firm will also need to consider how the Grandmaster recovers that time. Different clocks use different algorithms to do this, and again this requires careful thought.

Once these delays have been accommodated, a firm should have a good approximation to UTC in their building.

 

Internal time distribution

The next step is the distribution of time across the firm’s network to the servers, switches, hardware and software devices requiring a very accurate time source.

A firm needs to consider whether they want to use Network Time Protocol (NTP), Precision Time Protocol (PTP), Pulse Per Second (PPS) or other means to disseminate the time to each server.
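
As a flavour of what a protocol like NTP is doing under the hood, the sketch below shows the classic four-timestamp offset and round-trip-delay arithmetic from a single request/response exchange. It is illustrative only; in practice firms would rely on the protocol implementations themselves rather than hand-rolled code.

```java
/**
 * Standard NTP-style offset/delay arithmetic from one exchange.
 * All times in nanoseconds:
 *   t1 = client send time, t2 = server receive time,
 *   t3 = server send time,  t4 = client receive time.
 */
public class NtpOffset {

    /** Estimated offset of the client clock relative to the server. */
    static long offsetNanos(long t1, long t2, long t3, long t4) {
        return ((t2 - t1) + (t3 - t4)) / 2;
    }

    /** Network round-trip time, excluding server processing time. */
    static long roundTripDelayNanos(long t1, long t2, long t3, long t4) {
        return (t4 - t1) - (t3 - t2);
    }

    public static void main(String[] args) {
        // Illustrative timestamps only.
        long t1 = 0, t2 = 600_000, t3 = 650_000, t4 = 1_100_000;
        System.out.println("offset ~ " + offsetNanos(t1, t2, t3, t4) + " ns");
        System.out.println("delay  ~ " + roundTripDelayNanos(t1, t2, t3, t4) + " ns");
    }
}
```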

 

Server time recovery

At each server, a decision must be made about how to recover the distributed time and how to keep the server’s clock in synchronisation with the Grandmaster.

Bearing in mind that a large bank could have 50-100 different operating systems, the requirements of each will need to be catered for and the divergence implications considered.
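
As a deliberately simplified illustration, the sketch below shows the sort of discipline loop a time daemon runs on each server: measure the offset against the Grandmaster and gently slew a correction rather than stepping the clock. Real daemons are far more sophisticated, so treat this as a toy model only.

```java
/**
 * A toy clock-discipline loop: measure the offset from the reference
 * and apply a proportional correction (a slew) rather than stepping
 * the clock. Purely illustrative; real daemons use filtered PLL/FLL-style control.
 */
public class ClockDiscipline {

    private double correctionNanos = 0;       // accumulated slew applied so far
    private static final double GAIN = 0.5;   // proportional gain, an assumed value

    /** One iteration: offset is local clock minus Grandmaster, in nanoseconds. */
    void onOffsetMeasured(double offsetFromGrandmasterNanos) {
        correctionNanos += GAIN * offsetFromGrandmasterNanos;
    }

    /** The corrected local reading, given a raw local clock value in nanoseconds. */
    double correctedNanos(double rawLocalNanos) {
        return rawLocalNanos - correctionNanos;
    }

    public static void main(String[] args) {
        ClockDiscipline discipline = new ClockDiscipline();
        discipline.onOffsetMeasured(10_000);   // local clock found 10 microseconds ahead
        System.out.println(discipline.correctedNanos(1_000_000_000.0));
    }
}
```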

 

Server or device consumption

Once the server or device’s clocks have been synchronised to UTC, the firm will need to address the divergence effects generated by how the applications will then use that time.

Applications can face delays in getting time from the operating system, a hardware device or a clock in the server. The CPU could be running through millions, if not billions, of cycles a second as it executes various commands. Just because a particular application has requested the time, it doesn’t necessarily mean it will get it straight away; there is a chance the application will need to wait for another process to complete before the system delivers the time.

This can be a particular problem if, for example, the application is written in Java, as the garbage collection process may be taking place at the time of the request, resulting in an offset of a few microseconds.

The final point for consideration is how quickly the application is able to apply the timestamp to the message. Once this is done, you’re good to go.
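
One simple way to get a feel for these application-level effects is to measure how long the clock read itself takes. The sketch below times repeated calls to Java’s Instant.now(); the figures it produces will vary widely with the JVM, operating system and hardware, and GC pauses or scheduling will show up as outliers.

```java
import java.time.Instant;

/**
 * Rough measurement of how long it takes an application to obtain a
 * timestamp. Results depend heavily on the JVM, OS and hardware; JIT
 * warm-up, GC pauses and scheduling all show up as outliers.
 */
public class ClockReadCost {

    public static void main(String[] args) {
        final int iterations = 1_000_000;
        long sink = 0; // consume the results so the calls are not optimised away

        // Warm-up so the JIT has compiled the call path.
        for (int i = 0; i < iterations; i++) {
            sink += Instant.now().getNano();
        }

        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            sink += Instant.now().getNano();
        }
        long elapsed = System.nanoTime() - start;

        System.out.printf("~%.1f ns per Instant.now() call on average%n",
                (double) elapsed / iterations);
        System.out.println("(ignore) " + sink);
    }
}
```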

 

In summary, considering all of the things you need to get right for clock synchronisation is a bit like peeling an onion. Once you’ve got everything precisely synced up at one layer, it’s time to start on the next and work out how you can condition your clocks at that layer to an exacting standard.

However, meeting the new rules is likely to be a costly business. I’ve heard budget estimates of a couple of million for the large banks, which isn’t unrealistic when you consider they may have trading systems located in multiple data centres across Europe. Getting everything synced up to a level that will facilitate compliance with the new MiFID II mandate could therefore prove to be quite a considerable job.

 First published by The Trading Mesh
