Yes, ESMA, even MiFID II event logging needs clarity

Last week, the European Securities and Markets Authority (ESMA) said its MiFID II rules may take an extra year to implement. Steven Maijoor, ESMA's chair, said that "building of some complex IT systems can only really take off when the final details are firmly set in the [regulations]." I don't know which part of the 552-page regs he's referring to, but based on just two little sections called RTS 6 and RTS 25, I can see what he means.

These rules simply specify the records that trading organizations must keep about events that occur in the trading process, as well as the accuracy of the timestamps on those records. These aren't market-shaking rules like bond-trading transparency or research unbundling. But for the trading technologists who have to implement them—like the standing-room-only gathering at the STAC Summit in London last Tuesday—the new rules have big implications and raise even bigger questions. An interactive panel session chaired by Neil Horlock of Credit Suisse unearthed a number of them.

The rules

In brief, RTS 6 says that firms must report when trading instructions are exchanged with counterparties and when internal trading decisions are made, while RTS 25 specifies the amount that the timestamps on those records can diverge from Coordinated Universal Time (UTC). “High-frequency trading” events must be reported in microseconds to within +/- 100 microseconds of UTC. Other electronic trading events must be reported in milliseconds, accurate to +/- 1 millisecond. Manual trading can stay at 1-second timestamps, plus or minus a second.
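For concreteness, those bands can be sketched as a small lookup table. The granularity and divergence values come straight from the summary above; the structure and names are my own illustration, not anything in the RTS text:

```python
from dataclasses import dataclass

# Illustrative sketch of the RTS 25 accuracy bands described above.
# Band values are from the rule summary; field names are invented.
@dataclass(frozen=True)
class AccuracyBand:
    granularity_s: float      # required timestamp granularity, in seconds
    max_divergence_s: float   # maximum permitted divergence from UTC, in seconds

BANDS = {
    "hft":        AccuracyBand(granularity_s=1e-6, max_divergence_s=100e-6),
    "electronic": AccuracyBand(granularity_s=1e-3, max_divergence_s=1e-3),
    "manual":     AccuracyBand(granularity_s=1.0,  max_divergence_s=1.0),
}

def within_band(trading_type: str, measured_offset_s: float) -> bool:
    """True if the measured clock offset from UTC is inside the band."""
    return abs(measured_offset_s) <= BANDS[trading_type].max_divergence_s
```

Note that an HFT clock running 150 microseconds off UTC would fail the check even though the same offset would be perfectly acceptable for ordinary electronic trading.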

These rules have an understandable motivation: regulators want more visibility into the decision process behind trades and to understand the sequence of events. The latter is essentially impossible with today’s second-by-second timestamps, which do nothing to sort out the millions of trading instructions that can flow within a single second in modern liquid markets. Shrinking the potential window of confusion to 200 microseconds doesn’t solve the problem (since there can still be hundreds or even thousands of events whose order cannot be determined), but it certainly helps.

The knowns

Some of the implications of the new rules are pretty well understood at this point:

  • Trading protocols that don't currently support microsecond timestamps, such as the pervasive FIX standard, will need to be updated. That in turn means thousands of applications that use those protocols will need upgrades.
  • UTC-traceable time sync will need to be rolled out anywhere that it is not currently—from ultra low-latency apps to manual order entry—irrespective of the degree to which applications are allowed to diverge from UTC.
  • Many (most?) applications that fall into the 1-millisecond-accuracy bucket will need more accurate timing. In some cases, firms will need to upgrade their NTP infrastructure, while in others they will need to switch to PTP or proprietary software. Either way, migrating the menagerie of environments in an enterprise (everything from servers supporting Java applications to Windows desktops supporting Excel spreadsheets) will require considerable planning and operational risk management.
  • Verifying compliance will require new thinking and new systems. Validating one source of time requires another source that is more accurate. How frequently a firm validates synchronization for a given class of application should be backed up by empirical data. Infrastructures for time synchronization and event logging will require their own monitoring and alerting systems.
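To make that last point concrete, a monitoring system might reduce clock-offset samples (local clock minus a more accurate reference) to a few summary statistics and flag any samples that breach the permitted divergence. A minimal sketch, with invented function names, purely as an illustration:

```python
import statistics

def offset_summary(samples):
    """Summarize absolute clock offsets (in seconds) so that validation
    intervals can be backed by empirical data, as suggested above."""
    abs_offsets = [abs(s) for s in samples]
    return {
        "max": max(abs_offsets),
        "mean": statistics.mean(abs_offsets),
        "count": len(abs_offsets),
    }

def breaches(samples, tolerance_s):
    """Return the offset samples whose magnitude exceeds the permitted
    divergence from the reference clock (e.g., 100e-6 for the HFT band)."""
    return [s for s in samples if abs(s) > tolerance_s]
```

In practice the "reference" is itself a clock that must be more accurate than the one being validated, which is exactly why verification needs its own infrastructure.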

The unknowns

Those tasks would already be a lot to accomplish by the end of 2016 (the current deadline), but trading organizations are concerned by some big questions that could make it even harder:

  • Do we have to abandon GPS? UTC is a standard maintained by an organization of national labs like NPL in the UK and NIST in the US. Nearly all trading organizations today use GPS signals as their source for UTC, and RTS 25 appears to explicitly allow this. However, the labs have been asserting that GPS is not traceable to UTC and that GPS-derived timestamps have failed to stand up in court for that reason. Given that some of these labs are trying to sell their own commercial alternatives to GPS, one can question their impartiality. But the doubt has been sown. While some audience members at the STAC Summit vociferously defended GPS as a source of UTC, everyone seemed to agree that ultimately what matters is the view of regulators and the law. Denying GPS would require a mass migration of all trading systems to fiber-delivered time signals from national labs across Europe, whether those systems are at proximity/co-location centers or on the trading firm’s premises. In certain countries, a local UTC source might not even be available yet. This would be a big deal.
  • What is HFT? As described above, the accuracy requirements are most stringent for “high frequency trading” (HFT). According to some panelists, ESMA’s definition of HFT is still ambiguous and might encompass a wide range of trading applications that most of us would not otherwise consider HFT. (The FIA nicely summarized the debate earlier this year.) If so, an organization might need to tightly synchronize apps in diverse environments that are not subject to the high-performance engineering standards common in the latency-sensitive world (e.g., VB apps running in virtual machines, Java and .NET apps where timestamps can be thrown off by garbage collection or require upgrades/modifications just to report times to 6 decimal places, etc.). A broad definition of HFT would also probably imply a much wider need to capture messages by sniffing networks, another non-trivial logistical challenge (and non-trivial cost).
  • What is the real tolerance for error? By its nature, time synchronization is subject to both systematic error (clocks slowly drifting apart) and random error (things occasionally happening on one system or another that get in the way). Errors have statistical distributions that include outliers. As a consequence, engineers usually speak in terms of tolerances, acknowledging that perfection is either unachievable or not worth the cost. But RTS 25 is absolute. Most trading organizations would prefer that ESMA specify a tolerance that the industry can stick to (e.g., 99% of timestamps within the defined bands?) rather than maintaining an implied requirement for perfection and leaving it to firms to decide how much non-compliance they can get away with.
  • Does ESMA really want a “God’s Eye” sequence number? As written, RTS 6 appears to ask for the impossible: assigning sequence numbers to all reportable events that occur throughout an institution, in the order that they occur. Explaining why I call this a “God’s Eye” scheme and why it’s effectively impossible for mere mortals requires a blog post of its own. However, the attendees at the STAC Summit seemed to understand the implications immediately, judging from the two reactions I saw when it came up: laughter and dropped jaws. It’s quite possible that the authors of RTS 6 intended something less ambitious, and there is indeed a minimalist way to interpret the current language (which also has minimal value). ESMA really needs to clarify what they’re trying to achieve.
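To give a flavor of why a global, order-of-occurrence sequence number is so hard, consider a Lamport logical clock, the classic tool for ordering events across distributed systems. It yields a causally consistent ordering, but events that happen concurrently on independent machines can legitimately end up with the same counter value; a single institution-wide sequence would require funneling every event through one serialization point. This sketch is my own illustration, not anything in RTS 6:

```python
class LamportClock:
    """A logical clock: orders causally related events, but cannot
    totally order concurrent events on independent systems."""

    def __init__(self):
        self.counter = 0

    def local_event(self) -> int:
        # Tick for an event that happens on this system.
        self.counter += 1
        return self.counter

    def on_receive(self, remote_counter: int) -> int:
        # On receiving a message, jump past the sender's clock, then tick.
        self.counter = max(self.counter, remote_counter) + 1
        return self.counter

# Two independent desks ticking concurrently:
desk_a, desk_b = LamportClock(), LamportClock()
a1 = desk_a.local_event()   # counter 1 on desk A
b1 = desk_b.local_event()   # counter 1 on desk B -- same number
```

The two concurrent events both carry the value 1; nothing in the scheme says which "really" happened first. That ambiguity is precisely what an institution-wide, order-of-occurrence sequence number would have to resolve.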

These aren't the only questions around RTS 6 & 25, but the answers to those above have a major impact on the scope of the industry effort to comply. Until ESMA clarifies the rules, I can’t see how anyone can predict how long implementation will take.


About the STAC Blog

STAC and members of the STAC community post blogs from time to time on issues related to technology selection, development, engineering, and operations in financial services.
