App timestamping and MiFID 2: Myths and mysteries
The STAC-TS Working Group has poured a lot of effort into developing standards and software for demonstrating timestamp accuracy to the satisfaction of European regulators concerned with RTS 25 of MiFID 2. Much of that effort has concerned software-application timestamps. Application timestamping is crucial because it is often the least expensive way to comply with the reporting requirements of MiFID 2: most applications already timestamp and capture reportable events, and certain events are explicitly required by the regulations to be timestamped within applications.
There are two sources of error in application timestamps:
- Host clock error. This is the difference between UTC and the time of the clock in the server or desktop where the app is running.
- Application-level error. This is error experienced by the application that has nothing to do with the accuracy of the clock. It consists primarily of the time the system takes to assign a timestamp once one has been requested, i.e., timestamp-assignment delay. That delay is caused in small part by the instructions necessary to obtain the timestamp, in larger part by things like cache misses, and in largest part by gross disruptions such as scheduling jitter in the operating system or garbage collection in a virtual machine. There are secondary components to application-level error, such as the resolution of the timestamp mechanism (which can be fairly coarse in virtualized environments), but timestamp-assignment delay causes the biggest outliers, as the sketch after this list illustrates.
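To make timestamp-assignment delay concrete, here is a minimal sketch that estimates it by taking two clock readings back to back: the gap between them reflects the cost of the timestamping instructions plus anything, such as a cache miss or a scheduler preemption, that lands between the two reads. This is my own illustration, not part of the STAC-TS toolset, and it assumes a POSIX system with clock_gettime.

```c
/* Minimal sketch: estimate timestamp-assignment delay by taking
 * back-to-back clock readings and recording the gap between them.
 * Illustrative only; not the STAC-TS measurement methodology. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define SAMPLES 1000000L

static inline uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);   /* the clock an app would typically use */
    return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

int main(void)
{
    uint64_t sum = 0, worst = 0;
    for (long i = 0; i < SAMPLES; i++) {
        uint64_t t1 = now_ns();
        uint64_t t2 = now_ns();           /* gap approximates assignment delay */
        uint64_t d = t2 - t1;
        sum += d;
        if (d > worst)
            worst = d;
    }
    printf("mean gap: %.1f ns, worst gap: %llu ns\n",
           (double)sum / SAMPLES, (unsigned long long)worst);
    return 0;
}
```

On an otherwise idle machine the mean gap is typically small, but the worst-case gap can be orders of magnitude larger whenever the loop happens to be preempted or takes a cache miss, which is exactly the outlier behavior described above.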
I'll discuss host-clock error in a subsequent blog series. The focus of the present series is application-level error. I'm tackling it first because it is often one of the biggest potential compliance issues with RTS 25, for reasons I will explain.
It is also one of the most misunderstood issues. When I discuss application-level error with people outside the STAC-TS Working Group, or come across vendor points of view, I often encounter certain myths that can be risky or expensive if they form the basis of a compliance strategy. So I thought they were worth examining in a blog series. Along the way, I'll also point out a couple of mysteries around how MiFID 2 will be interpreted.
To support some of my points throughout the series, I've inserted data that summarize the application-level error characteristics of different system configurations. These measurements (the STAC-TS.ALE benchmarks) are produced and analyzed automatically by the STAC-TS toolset when it is run on a given system. The tools conduct tests under a variety of load conditions representing challenging cases in production and apply a conservative analysis to the results.
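As a rough illustration of what testing under load means, the hedged sketch below extends the one above: background threads churn a large buffer to create cache and scheduling pressure while the main thread samples timestamp gaps, and the summary deliberately reports the worst observed value rather than an average. This is a toy stand-in, not the STAC-TS.ALE methodology; the thread count, buffer size, and sample count are arbitrary choices of mine. Compile with -pthread.

```c
/* Toy stand-in for load-based testing: sample timestamp gaps while
 * background threads create cache and scheduling pressure, then
 * report a conservative (worst-case) summary. Not STAC-TS.ALE. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define LOADERS 4
#define WORKING_SET (1 << 24)   /* 16 MiB, chosen arbitrarily to overflow caches */
#define SAMPLES 200000L

static volatile int running = 1; /* volatile is sufficient for this sketch */

static void *loader(void *arg)
{
    (void)arg;
    volatile char *buf = malloc(WORKING_SET);
    while (running)
        for (size_t i = 0; i < WORKING_SET; i += 64)
            buf[i]++;            /* touch one byte per cache line */
    free((void *)buf);
    return NULL;
}

static inline uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

int main(void)
{
    pthread_t tid[LOADERS];
    for (int i = 0; i < LOADERS; i++)
        pthread_create(&tid[i], NULL, loader, NULL);

    uint64_t worst = 0;
    for (long i = 0; i < SAMPLES; i++) {
        uint64_t t1 = now_ns();
        uint64_t t2 = now_ns();
        if (t2 - t1 > worst)
            worst = t2 - t1;
    }

    running = 0;
    for (int i = 0; i < LOADERS; i++)
        pthread_join(tid[i], NULL);

    printf("worst-case gap under load: %llu ns\n",
           (unsigned long long)worst);
    return 0;
}
```

Reporting the maximum rather than a mean mirrors the conservative spirit described above: for compliance purposes, the outliers are what matter.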