STAC Summit, 26 Jun 2017, London

STAC Summits

STAC Summits bring together industry leaders in architecture, app dev, infrastructure
engineering, and operational intelligence to discuss important technical challenges in
the finance industry. Come to hear leading ideas and exchange views with your peers.

DoubleTree by Hilton Hotel London - Tower of London
7 Pepys Street, London EC3N 4AF


Click on the session titles to view the slides (may require member permissions).


STAC Exchange

Vendors with exhibit tables at the conference.

Reporting timestamp traceability from best-practice tests (Fast Data)

Organizations that are subject to the timestamp-accuracy requirements of MiFID 2 must be able to demonstrate to regulators that they have used best practices to ensure compliance. Monitoring is part of this, but so is testing, which covers things that can't be monitored or that need to be prevented. ESMA have in fact said that "relevant and proportionate testing" is required. But how does a firm persuade a regulator that its testing has followed best practices? And how should it relate its testing back to the ultimate goal of timestamp traceability? The STAC-TS Working Group has been hard at work to answer these questions with industry standards and the tools to support them. Peter will provide a brief review of what it has delivered and what is coming.

Monitoring, testing, and best practices for MiFID 2 (Fast Data)

When we first started discussing MiFID 2 time compliance back in 2015, there was a debate over whether it was necessary to monitor time synchronization and supporting infrastructure on an ongoing basis. After all, the regulations seemed to indicate that an annual check would suffice. However, those days are long past. In its guidance last October, ESMA made clear that "relevant and proportionate monitoring should be required." So that leaves us with the question: what kind of monitoring is "relevant and proportionate"? With just six months before MiFID 2 goes into effect, firms need to start putting these systems and processes in place now. Our roundtable discussion, with interactive audience participation, will tackle some of the key questions, such as:
  Under what circumstances will we need to provide traceability evidence to regulators?
  What should be monitored, and how?
  What is the interplay between monitoring and testing?
  What data should be kept, and what can be discarded?
  What are the best ways to tie monitoring and test data back to traceability?
  Are there cases where it's acceptable to sacrifice accuracy for the sake of traceability?

Establishing a 10GbE timestamp reference (Fast Data)

The accuracy with which we can measure the latency of a network-attached device is limited by the relative accuracy of the network timestamps used to calculate the latency. Likewise, measuring the absolute accuracy of hardware or software timestamps relative to a reference like UTC or NIST requires a device with higher absolute accuracy than that of the subject timestamps. So trading firms are keen to have a reference network device that is more accurate than anything else in their environment. At the last STAC Summit, David described the bootstrap problem this presents: how do you measure the accuracy of the reference device? That is, what’s the reference for the reference? He then outlined a methodology for measuring the absolute accuracy of network timestamps to the nanosecond. However, the only results he provided were for timestamps on 1GbE packets. Achieving this level of accuracy for 10GbE—and measuring it throughout the second between pulses of PPS discipline—is a more challenging class of problem. In this talk, David will describe such a methodology—vetted and enhanced by the STAC-TS Working Group—and reveal its first results.
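As a back-of-the-envelope illustration of why the reference must be more accurate than anything it measures: when a latency is computed as the difference of two timestamps, the worst-case error is the sum of the two timestamps' absolute-accuracy bounds, since the errors can oppose each other. A minimal sketch (the function name and the figures in the comment are illustrative, not from the talk):

```python
def latency_error_bound_ns(in_stamp_err_ns, out_stamp_err_ns):
    """Worst-case error of a latency computed as t_out - t_in,
    where each timestamp carries its own absolute-accuracy bound.
    The two errors can pull in opposite directions, so the bounds add."""
    return in_stamp_err_ns + out_stamp_err_ns

# A 500 ns device latency measured with two timestamps that are each
# accurate to +/-50 ns is only known to +/-100 ns, i.e. 20% of the value.
print(latency_error_bound_ns(50, 50))  # 100
```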

Innovation Roundup (Fast Data)
  "Resilient Timing Solutions integrating Satelles STL and BroadShield"
    Jean-Arnold Chenilleau, Applications Engineer, Spectracom
  "Monitor and Audit Time in the OS"
    Steve Newcombe, Account Manager, Chronos
  "Going with the Flow"
    Roger Harris, Senior Sales Engineer, Napatech
  "100G monitoring: More Bandwidth, New Challenges"
    Alessandro Lucchese, Senior Solutions Engineer, cPacket
  "New, lower-cost, higher-performance trade data recording and playback options from Endace"
    James Barrett, Senior Director EMEA Sales, Endace


Picosecond-scale measurements for nanosecond-scale trading (Fast Data)

Across liquid markets, latency and jitter continue to shrink. This means that the differences in latency that can have a material effect on P&L continue to shrink as well. When it’s possible to get from tick to trade in a few hundred nanoseconds, firms start to care about every nanosecond in their information pathways. But one of the challenges in squeezing out nanoseconds is measuring them. For measurement to stay ahead of the measured, we must enter the world of picoseconds. Matthew spends a lot of time in this world, and according to him, it is a strange and interesting place. In this talk, he’ll take us all on a brief tour of the challenges, pitfalls and some unexpected findings along the way.

STAC Update: Fast Data

Peter will discuss the latest research and Council activities related to low-latency/high-throughput workloads.

Innovation Roundup (Fast Data)
  "Metamako platforms: Robust, flexible, controllable."
    Dr. David Snowdon, CTO, Metamako
  "Product Update: NovaLink 30% latency drop, New feeds and features"
    Luc Burgun, CEO, NovaSparks
  “A deeper look into the network”
    Richard Hastie, Sr Sales Director, Business Development, Mellanox
  "Solarflare Updates"
    David Riddoch, Chief Architect, Solarflare
  "OpenShift in Financial Services"
    Chris Milsted, Principal Solution Architect, Red Hat


STAC Update: Big Compute

Peter will discuss the latest research and Council activities in compute-intensive workloads.

Using Big Memory to accelerate Monte Carlo (Big Compute)

Pricing and computing the risk of non-linear instruments like derivatives frequently involves Monte Carlo simulations that are very compute intensive and time consuming. But business groups continue to ask for faster turnaround on pricing and risk—in fact, many would like it in real time if they could get it. As part of HPE Labs, Natalia leads a team that is focused on accelerating Monte Carlo simulations for financial workloads. According to Natalia, clever use of large-memory systems can accelerate simulations by up to 10,000x—that is, accomplishing what normally takes hours in mere seconds. Driving down the time per simulation could provide managers with pricing and risk data in real time, enabling them to evaluate more scenarios and make better investment decisions. Natalia will outline the HPE Labs approach and provide preliminary data to back it up.
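To see why these simulations are so compute-hungry, consider even the simplest case: a plain Monte Carlo pricer for a European call under geometric Brownian motion. Its error shrinks only as 1/sqrt(N), so tight price estimates demand very large path counts. The sketch below is a textbook baseline, not HPE's large-memory approach, and all parameter values are illustrative:

```python
import math
import random

def mc_european_call(s0, k, r, sigma, t, n_paths, seed=42):
    """Price a European call by simulating terminal prices under
    geometric Brownian motion and discounting the mean payoff."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)          # one standard normal draw per path
        s_t = s0 * math.exp(drift + vol * z)
        payoff_sum += max(s_t - k, 0.0)  # call payoff at expiry
    return math.exp(-r * t) * payoff_sum / n_paths

# At-the-money call: spot 100, strike 100, 5% rate, 20% vol, 1 year.
price = mc_european_call(100.0, 100.0, 0.05, 0.2, 1.0, n_paths=100_000)
print(round(price, 2))
```

Path-dependent, non-linear products multiply this cost by the number of time steps per path, which is where speedups on the order of 10,000x start to matter.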

Optimizing the cost of throughput for financial compute (Big Compute)

Over the years, STAC-A2 benchmarks have seen a variety of compute architectures compete fiercely for the crown of the fastest platform for option pricing. The flip side of that blazing speed is the cost of pricing a portfolio of options—that is, the cost of throughput. Though not a new concern, cost of throughput is coming into even sharper focus as regulations like FRTB promise to require a significant increase in risk calculations, probably at a pace that far exceeds Moore's Law. In this talk, Oleg will focus on practical considerations that help to drive down the cost of throughput for option pricing.

STAC Update: Big Data

Peter will discuss the latest benchmark results involving big data workloads, including tick analytics, resource management in multi-tenant clusters, and optimizing Spark performance.

The brave new world of big I/O (Big Data)

Analytic demands in finance are putting tremendous pressure on data infrastructures, whether it's quants and AI programs searching for optimal trading and investment strategies or risk systems generating all the scenarios required by new regulations. At the same time, there's momentous change on the supply side. The economics of flash and interconnects make it possible in theory to achieve leaps in both performance and performance per dollar (current shortages notwithstanding). With these trends clearly in sight, a new generation of innovators has begun to bring products to market that re-think how data is stored and accessed and how much that should cost. Their points of view have implications for how application architects think about data access. In this panel, some of those innovators will share those views.

Innovation Roundup (Big Data)
  “Realtime Persistent Computing on Google Cloud and Intel”
    Matt Meinel, Senior Director, Business Development, Financial Services, Levyx
  "Quasardb: Time series at scale."
    Gilles Tourpe, Sales, quasardb
  "Bringing STAC-M3 to Life: Kx for DaaS and Thomson Reuters' VA8"
    James Corcoran, Head of Engineering, EMEA, Kx Systems


How to accelerate backtests by leveraging the open source MPI framework (Big Data)

As trading firms face increasing pressure to make their algorithms smarter, some of them are rethinking the architectures they use for developing and backtesting those algorithms. In general, the requirement is to distribute these big data problems across many compute nodes and to read and write data as efficiently as possible. While firms continue to have the option of completely hand-crafting a framework or using proprietary products, they are increasingly considering open source components as well. Gerd will argue that open source technologies from the HPC community—starting with MPI—should be top of the list. With 25 years behind it, MPI has evolved the functionality and compatibility that Gerd believes make it an ideal foundation for a modern backtesting architecture. He will illustrate an MPI-based approach with an example taken from a STAC workload.
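To make the distribution pattern concrete, here is a toy version of the scatter/compute/gather structure that MPI provides. A real MPI job would scatter shards to ranks and reduce the partial results (e.g. comm.scatter and comm.reduce in mpi4py); this sketch uses a thread pool so it runs standalone, and the moving-average strategy is purely illustrative, not from the talk:

```python
from concurrent.futures import ThreadPoolExecutor

def shard_pnl(prices, window=5):
    """Toy moving-average crossover P&L on one shard of a price series.
    In an MPI job, this is the per-rank work after the data is scattered."""
    pnl, position = 0.0, 0
    for i in range(window, len(prices)):
        avg = sum(prices[i - window:i]) / window
        signal = 1 if prices[i] > avg else -1
        if position:
            pnl += position * (prices[i] - prices[i - 1])
        position = signal
    return pnl

def distributed_backtest(prices, n_shards=4, window=5):
    # Partition the series into contiguous shards (the MPI_Scatter step),
    # run the strategy on each shard in parallel, then sum the partial
    # P&Ls (the MPI_Reduce step). Shard-boundary effects are ignored here.
    size = max(window + 1, len(prices) // n_shards)
    shards = [prices[i:i + size] for i in range(0, len(prices), size)]
    with ThreadPoolExecutor(max_workers=n_shards) as pool:
        return sum(pool.map(lambda s: shard_pnl(s, window), shards))

print(distributed_backtest(list(range(100))))
```

The design point is that the per-shard work is embarrassingly parallel, so the same code scales from threads on one box to MPI ranks across a cluster; only the scatter/reduce plumbing changes.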




