STAC Summit, 13 June 2016, NYC


Monday, 13 June 2016

Grand Hyatt New York, Park Avenue at Grand Central, New York
Manhattan Ballroom



Click on the session titles to view the slides and videos (may require member permissions).


Big Compute

Big Data

Fast Data


STAC Exchange   Fast Data   Big Data   Big Compute

Vendors with exhibit tables at the conference (click here for the exhibitor list).

STAC Update: Big Compute   Big Compute

Peter will review benchmark results, research in progress, and working group activities for STAC-A2, the computational finance standard for testing the latest high-performance technologies.

Knights Landing Lands at STAC   Big Compute

The next generation of Intel Xeon Phi (code-named Knights Landing, or KNL) is not yet released, but STAC has been able to subject a KNL system to STAC-A2, the standard benchmark suite based on financial market risk analysis. Manufactured on a 14-nanometer process, KNL is the first Xeon Phi that fits into a CPU socket and can run as the host processor with direct access to host memory. After David explains a bit about how KNL works, Peter will pry open the STAC Vault to give the audience a sneak peek at the KNL STAC-A2 results.

STAC Update: Big Data   Big Data

Peter will review the latest benchmarking work on big data, such as:

  • STAC-M3 for enterprise tick analytics;
  • STAC-A3 results for backtesting infrastructure; and
  • ongoing work with Apache Spark.

Innovation Roundup   Big Data   Big Compute
  "Enhancing C++ Application Performance with Intel Code Modernization"
    Brian Caball, Data Scientist, First Derivatives
  "STAC-M3 Kanaga: Time Series Data Analytics Get Really Interesting at 31TB"
    Terry Keene, CEO, Integration Systems
  "Avere - Enabling Cloud Compute Bursting in Quantitative Finance"
    Pieter Fountain, Regional Manager, Avere Systems
  "Your applications are precious, your database isn't: re-platforming made easy"
    Mike Waas, Founder & CEO, Datometry


Distributed time series analysis in Spark   Big Data

Analyzing time series data at scale is critical for trading. This session introduces Huohua, Two Sigma’s implementation of highly optimized time series operations in Apache Spark. Huohua performs truly parallel and rich analyses on tick data by taking advantage of the data’s natural ordering to provide locality-based optimizations. Huohua is an open-source library for Spark built around the OrderedRDD, a time-series-aware data structure, and a collection of time series utility and analysis functions that use OrderedRDDs. Unlike DataFrames and Datasets, Huohua’s OrderedRDDs can leverage the existing ordering properties of datasets at rest and the fact that almost all manipulation and analysis of these datasets respects their ordering properties. It differs from other time series efforts in Spark in its ability to compute efficiently across panel data and on large-scale, high-frequency data.

In this talk, Wenbo will present the architecture of OrderedRDDs and their integration with Spark SQL, DataFrames, and Datasets, along with the analysis tools Two Sigma has built on top to merge, join, aggregate, and intervalize data and to compute windowed, rolling, and cycle-based summarizations and cross-panel analyses. He’ll also present results comparing Huohua to similar operations using off-the-shelf RDDs and DataFrames.

The data center is the computer, but is it time for an upgrade?   Big Data   Big Compute

Several years ago, engineers at Google popularized the notion that the "data center is the computer", arguing that an Internet giant, and soon other industries, would stop thinking of a pizza box as the unit of compute and instead think of the data center itself as a warehouse-scale computer. That has been the case for some time in finance, where workloads like risk analysis and strategy backtesting require orchestration of large compute and storage resources. Adoption of big data technologies for workloads like anomaly detection has increased the application scope. How has the computing picture changed in the last few years? How well is the IT industry delivering on the promise of data-center computing? What kind of hardware innovations are making it more compelling, and what is still needed? How about the software industry? We'll dig into these topics with a few experts who have more than an academic interest in them.

Test standards for time sync and event capture   Fast Data

The STAC-TS Working Group is defining standards trading firms can use to vet event timestamping and capture solutions. Peter will provide an overview of the emerging standards and how the trading community can benefit.

Time synchronization in multi-site architectures   Fast Data

In order to comply with time-synchronization requirements driven by regulations such as MiFID 2 or business needs such as latency monitoring, many firms are turning to the Precision Time Protocol (PTP) to synchronize clocks within a data center. Can this approach be extended across a wide area? Many enterprises have multiple sites. Do they need redundant sources of UTC in each data center, such as GPS-fed grandmaster clocks, or can the UTC source in one data center be leveraged in other sites using PTP over the WAN or MAN? How does one certify synchronization across sites? In this talk, Stephen will provide data from tests that Endace has conducted with PTP over wide-area networks and discuss the implications for multi-site architectures.

Capture and analysis of network data: What's changed and what hasn't?   Fast Data   Big Data

Many firms today use wire-capture data to support trading revenue and reliability through latency and health monitoring. Many of their capture and analysis solutions have been in place for several years. How have the needs of low-latency trading firms and exchanges changed in that time? How well are solutions keeping pace with those changing needs? What role can network captures play in compliance with MiFID 2 requirements for precisely timestamped event capture, which probably affects many US firms? Are there opportunities to leverage network data for purposes beyond infrastructure management? We'll get the views of some key opinion leaders in this space.

STAC Update: Network I/O   Fast Data   Big Data

Peter will explain the latest developments related to the STAC Network I/O Special Interest Group.

Innovation Roundup Fast Data
  "High Performance Networking in Increasingly Regulated Environments"
    Eitan Rabin, Sr Director Financial Services Software, Mellanox
  "Meeting FINRA and MiFID Regulations with High Accuracy NTP"
    John Fischer, Chief Technology Officer, Spectracom
  "Low latency networking, time synchronization and packet capture using Exablaze NICs and switches"
    Peter von Konigsmark, Director of Engineering, Exablaze
  "Introducing NovaLink"
    Olivier Baetz, COO, NovaSparks
  "8000 Series"
    Davor Frank, Senior Solutions Architect, Solarflare
  "FPGA Financial Solutions"
    Stephen Weston, Principal Engineer, Intel


The latency race is just getting started   Fast Data

Computing hardware has played a central role in the evolution of low-latency trading for over a decade. Initially the race was largely about CPU clockspeeds. When clocks more or less hit a ceiling, the game shifted to how to exploit additional cores and cache memory within a CPU's power envelope, how to migrate certain functionality to FPGA, and how to exploit SSDs. In this talk, David will argue that four trends in the hardware industry are starting to affect low-latency trading today in important ways and will do so more in the near future: new processor capabilities, shrinking transistors for FPGA, tighter CPU/FPGA integration, and post-NAND solid-state storage technologies.

Giving HFT a voice in C++   Fast Data

Despite its extensive use of C++, the traditionally secretive HFT industry hasn't had much influence on the direction of the programming language itself. Is that about to change? The ISO C++ committee's study group SG14 has recently begun reaching out to HFT developers to elicit their input to C++ standards, and some are getting involved. As head of SG14, Michael will explain how the process works, what kind of enhancements are under consideration, and how you can get involved.

Talks with the creator of C++   Fast Data   Big Data   Big Compute

Since Bjarne Stroustrup began work on C++ in 1979, he has led it to become an indispensable tool for millions of programmers and a key component of today's critical infrastructure. He is still actively helping to drive the future of the language through the ISO C++ committee, and he actively uses the language to tackle large-scale problems at Morgan Stanley. Few sectors have more money riding on C++ than the trading industry, which uses it to absorb information, understand risk, and place trades throughout the capital markets. And few groups in the trading industry have more devotees of C++ than the STAC community. So we have invited Bjarne to deliver the closing keynote of this STAC Summit.


No littering!

How to write completely resource-safe and leak-free code without limitations, overheads, or a garbage collector.


Q & A on C++

Bjarne will take questions from the audience on anything related to C++.









About STAC Events & Meetings

STAC events bring together CTOs and other industry leaders responsible for solution architecture, infrastructure engineering, application development, machine learning/deep learning engineering, data engineering, and operational intelligence to discuss important technical challenges in trading and investment.