Epilogue

The STAC-TS Working Group views testing of application-level error as a crucial part of demonstrating ongoing compliance. This is not only because testing is the way to see potential potholes in the road ahead (whereas monitoring is a rear-view mirror). It's also because it is not feasible to monitor application-level error: the application-level error in a given timestamp is by definition unknown (if you knew it, you could simply correct the timestamp), and monitoring via sampling is intrusive and yields either too many false negatives or too many false positives. The STAC-TS philosophy is therefore to determine the statistical distribution of application-level error of a given platform via testing under conditions at least as bad as production, and to re-test any time the platform configuration changes in any way.
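
To make the idea concrete, here is a minimal sketch (in Python) of the kind of analysis such testing implies. It assumes a hypothetical harness that can inject events whose true occurrence times are known (e.g., from a reference signal) and record the timestamps the application assigned to them; the function names, variable names, and sample values below are illustrative only and are not part of the STAC-TS.ALE methodology.

```python
import statistics

def summarize_error(true_times_ns, app_timestamps_ns):
    """Summarize application-level error samples (app timestamp minus true event time)."""
    errors = sorted(t_app - t_true
                    for t_app, t_true in zip(app_timestamps_ns, true_times_ns))
    n = len(errors)

    def pct(p):
        # Crude nearest-rank percentile over the sorted error samples.
        return errors[min(n - 1, int(p / 100 * n))]

    return {
        "min_ns": errors[0],
        "median_ns": statistics.median(errors),
        "p99_ns": pct(99),
        "max_ns": errors[-1],   # worst case observed under test load
        "samples": n,
    }

# Fabricated example values (nanoseconds), purely for illustration:
true_times = [1_000, 2_000, 3_000, 4_000, 5_000]
app_stamps = [1_250, 2_900, 3_180, 4_420, 5_075]
print(summarize_error(true_times, app_stamps))
```

The point of collecting a distribution rather than a single number is that tail behavior (e.g., the worst case under load) is what determines whether a platform can stay within an error budget.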

However, as I mentioned at the outset, application-level error is only part of the total error in application timestamps. Demonstrating that a given platform complies with RTS 25 requires either combining STAC-TS.ALE benchmarks with STAC-TS.CE benchmarks, which characterize host-clock error in steady-state and holdover situations, or using a methodology such as STAC-TS.AVN (application versus network), which measures total application-timestamp error holistically, covering both application-level error and host-clock error.

Nevertheless, STAC-TS.ALE can be quite useful on its own as a quick way to rule out particular platforms or identify those needing remediation, without requiring time-sync software or any test equipment such as capture devices or oscilloscopes.

While low application-level error is not sufficient to comply with MiFID II, it is necessary.