STAC Summit, 20 Jun 2017, Chicago

STAC Summits

STAC Summits bring together industry leaders in architecture, app dev, infrastructure
engineering, and operational intelligence to discuss important technical challenges in
the finance industry. Come to hear leading ideas and exchange views with your peers.

The Metropolitan Club, Willis Tower, 233 South Wacker Drive, 66th Floor, Chicago



Click on the session titles to view the slides (may require member permissions).


STAC Exchange (Fast Data, Big Data, Fast Compute, Big Compute)

Vendors with exhibit tables at the conference (click here for the exhibitor list).

STAC Update: Big Compute

Peter will discuss the latest research and Council activities in compute-intensive workloads.

Getting the most from CPU-based trading (Fast Data, Fast Compute)

Despite the growth of FPGAs in trading, a very large portion of latency-sensitive processing still happens on CPUs. And while CPU manufacturers busily harness Moore's Law to increase core counts, the thing that high-speed trading firms care most about is the speed of those cores, which has been increasing glacially—if at all. As a result, it is now common for trading firms to run overclocked systems. After reviewing some of the leading high-speed system offerings, we’ll discuss the state of the art. What are the limits today, and what does the future hold? What tradeoffs ensue from the choices facing customers (core counts, form factors, air or liquid cooling, tunable or locked-down systems, desktop or server CPUs, etc.)? What are most customers opting for today, and where do things seem to be trending?

How to accelerate backtests by leveraging the open source MPI framework (Big Data)

As trading firms face increasing pressure to make their algorithms smarter, some of them are rethinking the architectures they use for developing and backtesting those algorithms. In general, the requirement is to distribute these big data problems across many compute nodes and to read and write data as efficiently as possible. While firms continue to have the option of completely hand-crafting a framework or using proprietary products, they are increasingly considering open source components as well. Gerd will argue that open source technologies from the HPC community—starting with MPI—should be top of the list. With 25 years behind it, MPI has evolved the functionality and compatibility that Gerd believes makes it an ideal foundation for a modern backtesting architecture. He will illustrate an MPI-based approach with an example taken from a STAC workload.
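The distribute-and-reduce pattern described above can be made concrete with a small sketch. This is purely illustrative, not Gerd's implementation: the names and the dummy "backtest" are hypothetical, and the per-rank work runs serially here. In a real MPI deployment, the partitioning step would map to a scatter (e.g. `comm.scatter` in mpi4py, or `MPI_Scatterv` in C) and the final aggregation to a reduce (`comm.reduce` / `MPI_Reduce`).

```python
def partition(workload, n_ranks):
    """Split a list of work items (e.g. symbol-days) as evenly as
    possible across n_ranks compute nodes."""
    chunk, extra = divmod(len(workload), n_ranks)
    shards, start = [], 0
    for rank in range(n_ranks):
        end = start + chunk + (1 if rank < extra else 0)
        shards.append(workload[start:end])
        start = end
    return shards

def backtest_local(symbol_days):
    """Stand-in for one rank's local backtest over its shard.
    Returns a (pnl, trade_count) pair; the P&L here is a dummy value."""
    pnl = sum(hash(sd) % 100 - 50 for sd in symbol_days)
    return pnl, len(symbol_days)

# Hypothetical workload: 4 symbols x 5 trading days.
workload = [f"SYM{i}:2017-06-{d:02d}" for i in range(4) for d in range(1, 6)]

# "Scatter": each shard would go to one MPI rank; here we loop serially.
shards = partition(workload, n_ranks=3)
results = [backtest_local(shard) for shard in shards]

# "Reduce": combine per-rank results into a global summary.
total_pnl = sum(pnl for pnl, _ in results)
total_trades = sum(n for _, n in results)
```

The point of the pattern is that each rank reads and computes only its shard, so I/O and compute scale out with node count while the reduce step stays cheap.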

STAC Update: Big Data

Peter will discuss the latest benchmark results involving big data workloads, including tick analytics, resource management in multi-tenant clusters, and optimizing Spark performance.

The brave new world of big I/O (Big Data)

Analytic demands in finance are putting tremendous pressure on data infrastructures, whether it's quants and AI programs searching for optimal trading and investment strategies or risk systems generating all the scenarios required by new regulations. At the same time, there's momentous change on the supply side. The economics of flash and interconnects make it possible in theory to achieve leaps in both performance and performance per dollar (current shortages notwithstanding). With these trends clearly in sight, a new generation of innovators has begun to bring products to market that re-think how data is stored and accessed and how much that should cost. Their points of view have implications for how application architects think about data access. In this panel, some of those innovators will share those views.

Innovation Roundup (Big Data)
  “How to Predict Yesterday’s Weather with 100% Accuracy”
    Ike Nassi, Chief Technology Officer, TidalScale
  “Realtime Persistent Computing on Google Cloud and Intel”
    Matt Meinel, Senior Director, Business Development, Financial Services, Levyx
  "Bringing STAC-M3 to Life: Kx for DaaS and Thomson Reuters' VA8"
    Paul Grainger, Senior KDB+ developer, Kx Systems


Innovation Roundup (Fast Data)
  "Metamako platforms: Robust, flexible, controllable."
    Dr. David Snowdon, CTO, Metamako
  "Product Update: NovaLink 30% latency drop, New feeds and features"
    Olivier Baetz, COO, NovaSparks
  “A deeper look into the network”
    Bill Webb, Director, Ethernet Switch Sales, Mellanox
  "Beyond Latency."
    Laurent de Barry, co-Founder and Trading Products Manager, Enyx


STAC Update: Fast Data

Peter will discuss the latest research and Council activities related to low-latency/high-throughput workloads.

Traceability reporting for timestamps (Fast Data)

Organizations that are subject to timestamp-accuracy regulations such as Europe's MiFID 2 regulations (which apply to many US firms) and the CAT NMS plan must be able to demonstrate to regulators that they have used best practices to ensure compliance. Monitoring is part of this, but so is testing, which covers things that can't be monitored or that need to be prevented. The European regulators have in fact said that "relevant and proportionate testing" is required. But how does a firm persuade a regulator that its testing has followed best practices? And how should it relate its testing back to the ultimate goal of timestamp traceability? The STAC-TS Working Group has been hard at work to answer these questions with industry standards and the tools to support them. Peter will provide a brief review of what it has delivered and what is coming.

Establishing a 10GbE timestamp reference (Fast Data)

The accuracy with which we can measure the latency of a network-attached device is limited by the relative accuracy of the network timestamps used to calculate the latency. Likewise, measuring the absolute accuracy of hardware or software timestamps relative to a reference like UTC or NIST requires a device with higher absolute accuracy than that of the subject timestamps. So trading firms are keen to have a reference network device that is more accurate than anything else in their environment. At the last STAC Summit, David described the bootstrap problem this presents: how do you measure the accuracy of the reference device? That is, what’s the reference for the reference? He then outlined a methodology for measuring the absolute accuracy of network timestamps to the nanosecond. However, the only results he provided were for timestamps on 1GbE packets. Achieving this level of accuracy for 10GbE—and measuring it throughout the second between pulses of PPS discipline—is a more challenging class of problem. In this talk, David will describe such a methodology—vetted and enhanced by the STAC-TS Working Group—and reveal its first results.
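The opening point — that latency measurement accuracy is bounded by the accuracy of the timestamps themselves — can be shown with a small error-propagation sketch. The ±5 ns figures below are assumed for illustration only and are not from the talk.

```python
import math

def latency_uncertainty(err_t1_ns, err_t2_ns):
    """Uncertainty in a latency computed as t2 - t1 from two timestamps.

    Worst case: the two timestamp error bounds add directly.
    RMS: for independent, zero-mean errors they add in quadrature.
    """
    worst_case = err_t1_ns + err_t2_ns
    rms = math.sqrt(err_t1_ns**2 + err_t2_ns**2)
    return worst_case, rms

# Two capture points, each accurate to an assumed +/- 5 ns:
worst, rms = latency_uncertainty(5.0, 5.0)
# worst = 10.0 ns, rms ~ 7.07 ns: too coarse to resolve single
# nanoseconds, which is why a reference device with sub-nanosecond
# (picosecond-scale) accuracy is needed to verify other timestamps.
```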

Innovation Roundup (Fast Data)
  "New, lower-cost, higher-performance trade data recording and playback options from Endace"
    Tom Leahy, Senior Sales Engineer, Endace
  "Going with the Flow"
    Roger Harris, Senior Sales Engineer, Napatech
  "Solarflare Updates"
    Davor Frank, Senior Solutions Architect, Solarflare Communications
  "Resilient Timing Solutions integrating Satelles STL and BroadShield"
    John Fischer, CTO, Spectracom


Picosecond-scale measurements for nanosecond-scale trading (Fast Data)

Across liquid markets, latency and jitter continue to shrink. This means that the differences in latency that can have a material effect on P&L continue to shrink as well. When it’s possible to get from tick to trade in a few hundred nanoseconds, firms start to care about every nanosecond in their information pathways. But one of the challenges in squeezing out nanoseconds is measuring them. For measurement to stay ahead of the measured, we must enter the world of picoseconds. Matthew spends a lot of time in this world, and according to him, it is a strange and interesting place. In this talk, he’ll take us all on a brief tour of the challenges, pitfalls and some unexpected findings along the way.

Demystifying FPGA design and functional verification in the age of digital intelligence (Fast Data)

The number of capital markets firms developing FPGA-based solutions is growing. So is the pool of firms considering shifting at least some of their business logic to FPGA. As this trend continues, it’s helpful for technology managers to have as much context as possible about best practices in FPGA development, including both technical and human factors. Those who have not started down the FPGA path may wonder what it really entails. Those already neck deep in FPGA development may wonder how their experiences compare to those of their peers. As an organization that serves a broad swathe of the FPGA developer community, Mentor Graphics has a lot of hard data on these questions, in finance and many other industries. In this talk, Ted will share some of these findings. How are firms maximizing their FPGA development productivity? How do other FPGA shops look in terms of skillsets, team dynamics, technologies, and methodologies? Come to hear the answers and ask questions of your own.





About STAC Events & Meetings

STAC events bring together CTOs and other industry leaders responsible for solution architecture, infrastructure engineering, application development, machine learning/deep learning engineering, data engineering, and operational intelligence to discuss important technical challenges in finance.