
mdPlay: Assessing the price performance spectrum of replay mechanisms

1st Dec 2020

Beeks Analytics recently launched its new mdPlay service, enabling subscribers to precisely retransmit previously captured market data streams in pre-production environments. Understandably, each client’s exact requirements vary: whilst nanosecond-range replay precision will be an essential prerequisite for some subscribers, for others microsecond accuracy will more than suffice.

To ensure that these differing needs can be effectively met, Beeks Analytics offers the mdPlay service on three different platforms that span the price-performance spectrum. Where, for instance, a client wants to test a timing-sensitive system, and the traffic they’re trying to reproduce was captured at a particular resolution and fidelity, we can provide the appropriate tool to preserve those characteristics as accurately as possible.

A prime example of such a client might be a firm keen to reproduce microbursts accurately: they need to be sure that their algorithms (whether running as software on a standard CPU, or embedded in an FPGA) can handle these bursts, and they need to be able to recreate any issues reliably. For these firms, mdPlay must be able to replay a very high-bandwidth pulse of data very quickly, without spreading it over a wider window of time, so that it looks just like the original pulse that may have caused the issue.

In advance of the service’s launch, we extensively tested each of the three available platforms to assess the accuracy of the replayed market data compared to the original capture files.  In this blog I’ll be describing our methodology and explaining what the results revealed.



To conduct these tests, we initially captured a busy hour of market data traffic from one of the world’s largest exchanges. The particular exchange was chosen because we knew it to be representative of the type of data our clients are likely to wish to replay. The capture technique employed timestamped each packet with 10-nanosecond tick precision.

We then transmitted this data using each of the three different tools, and re-captured it (using a specific type of switch known to introduce very little jitter). We also synchronised the clocks on the capture side and the re-transmit side as accurately as possible, using either PTP or, where feasible, a PPS signal. Having performed these tasks, we were then able to compare the original capture with the new one.

For each packet in each capture, we calculated the timestamp delta from the first packet in the capture (the “absolute” timestamp delta), and the timestamp delta to the previous packet in the capture (the “frame-to-frame delta”). We then compared these figures to see how close the new capture was to the original. A difference in the absolute timestamp delta demonstrated drift in the replay mechanism, while a difference in the frame-to-frame delta indicated the amount of jitter in the replay. Here’s what we found…
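The comparison described above can be sketched in a few lines of Python. This is a minimal illustration of the method, not the actual mdPlay tooling; the function names are ours, and we assume per-packet timestamps have already been extracted from each capture as integers in nanoseconds.

```python
def absolute_deltas(timestamps):
    """Offset of each packet from the first packet in the capture."""
    first = timestamps[0]
    return [t - first for t in timestamps]

def frame_to_frame_deltas(timestamps):
    """Gap between each packet and the one before it."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def compare(original, replayed):
    """Per-packet drift (difference in absolute deltas) and jitter
    (difference in frame-to-frame deltas), both in nanoseconds."""
    drift = [r - o for o, r in zip(absolute_deltas(original),
                                   absolute_deltas(replayed))]
    jitter = [r - o for o, r in zip(frame_to_frame_deltas(original),
                                    frame_to_frame_deltas(replayed))]
    return drift, jitter

# Toy example: each replayed packet lands 10 ns later than the last,
# so the absolute error grows packet by packet -- that is drift.
orig = [0, 1_000, 2_000, 3_000]
repl = [0, 1_010, 2_020, 3_030]
drift, jitter = compare(orig, repl)
# drift  -> [0, 10, 20, 30]  (accumulating offset)
# jitter -> [10, 10, 10]     (constant extra gap per frame)
```

A clean replay would show drift and jitter values clustered around zero; a systematic positive drift, as in the toy example, is the signature of a replay tool that runs slightly slow.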



With the premium platform, the degree of accuracy achieved significantly surpassed that of the other two offerings: it achieved ±20 nanosecond replay accuracy, with all packets retransmitted.

This particular replay mechanism used dedicated replay hardware, and even when challenged with transmitting a large buffer of packets at once, it was able to replay them accurately using its on-board clock. This eliminates the situation where the replay mechanism has a packet ready but must wait for the operating system (OS) to reach a certain time before the next packet can be passed to the card: here the packets are already with the card, and the card performs the retransmission in hardware.

This was also the only platform able to synchronise replays on different ports accurately. In fact, the other platforms didn’t offer any features in their standard toolkits to do this; it had to be done by hand. For clients operating algo trading systems in ultra-time-sensitive environments, the ability to accurately synchronise multiple replays on different ports can be very useful, as it ensures that packets are put on to the wire in the correct order.

By comparison, the second replay mechanism we tested, the standard platform, was able to retransmit the vast majority of packets accurately but did experience occasional jitter and delay of up to 700 microseconds. However, this was limited to a very small fraction of the total packets transmitted: 99.355% of all packets were replayed within 1100 nanoseconds of the original capture file.
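For the curious, a figure like “99.355% of packets within 1100 ns” falls out of the per-packet timing errors directly. Here is an illustrative calculation, with hypothetical error values; the function name and the sample data are ours, not part of the test harness.

```python
def pct_within(errors_ns, threshold_ns):
    """Share of packets whose absolute timing error is within the threshold."""
    within = sum(1 for e in errors_ns if abs(e) <= threshold_ns)
    return 100.0 * within / len(errors_ns)

# Hypothetical per-packet errors in nanoseconds: five small errors and
# one 700-microsecond outlier of the kind the standard platform showed.
errors = [50, -120, 900, 700_000, 300, -80]
print(f"{pct_within(errors, 1_100):.3f}% of packets within 1100 ns")
```

In a real run the error list would contain millions of entries, one per packet, but the arithmetic is the same.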

This platform also required specific replay hardware, but with this particular tool the packets waited in software for the right time to be released. Once that time arrived, however, the card was able to retransmit the packets very quickly, as optimisation techniques avoided the need for the packets to travel through many of the OS layers.

The final replay mechanism tested was an open source tool that does not require dedicated replay hardware. Because this tool retransmitted files through a regular interface (rather than employing optimisation techniques to circumvent certain OS layers), the retransmission accuracy suffered. Furthermore, whilst the other platforms were able to handle nanosecond-precision captures, this tool could only access the previously captured data with microsecond precision.

An interesting feature of this platform was the drift it introduced over time. The error in each packet’s release time accumulated, producing a constant drift: in one of the tests we ran, the retransmission of an hour’s traffic took an hour and a minute to replay.
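That observation translates directly into a drift rate. A back-of-the-envelope sketch, using only the durations quoted above:

```python
# An hour of captured traffic took 61 minutes to replay.
original_s = 60 * 60   # captured duration: 3600 s
replayed_s = 61 * 60   # replay duration:   3660 s

# Fractional drift, expressed in parts per million and as a percentage.
drift_ppm = (replayed_s - original_s) / original_s * 1e6
print(f"drift ~ {drift_ppm:,.0f} ppm ({drift_ppm / 1e4:.2f}%)")
```

A clock running roughly 17,000 ppm slow is orders of magnitude worse than the nanosecond-scale errors of the hardware-assisted platforms, which is why this tool sits at the economy end of the spectrum.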

In summary, there was a mild decline in accuracy between the first and second replay mechanisms we tested, and then a more substantial gap between these commercial offerings and the open source tool. However, it’s also important to consider the cost/accuracy trade-off these platforms present. For firms for whom nanosecond accuracy isn’t essential, the balance may easily swing in favour of the open source mechanism.
