STAC Summit, 27 Oct 2016, London


WHEN
Thursday, 27 October 2016

WHERE
America Square Conference Center
17 Crosswall, London EC3N 2LB

Agenda

Click on the session titles to view the slides (may require member permissions).

 

Big Compute

Big Data

Fast Data

 



STAC Exchange (Big Compute, Big Data, Fast Data)
 

Vendors with exhibit tables at the conference (click here for the exhibitor list).


Welcoming remarks

 

Stories from the PTP battlefront (Fast Data)
 

As trading firms and exchanges pursue compliance with MiFID 2, a number of them plan to use IEEE 1588 (PTP) for synchronizing devices and hosts in their enterprises. What can they learn from firms that have already been down that road? On many client sites, Corvil’s monitoring appliances are consumers of PTP synchronization services. As a consequence, Corvil has experienced the good, bad, and ugly of real PTP deployments. In this talk, James will offer several observations on pitfalls and best practices with PTP in the real world. PTP promises sub-microsecond accuracy, but according to James, there's plenty that can go wrong in practice. Accuracy issues often go undetected for months and have all sorts of causes, from architecture and deployment weaknesses to PTP misconfigurations, insufficient diagnostics, and even bugs in switches. James will discuss a number of these problems and how they were diagnosed and solved.
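The offset estimation at the heart of PTP can be sketched in a few lines. The timestamps below are purely illustrative; real deployments add layers (boundary clocks, transparent clocks, hardware timestamping) where the problems James describes tend to creep in.

```python
# Sketch of the IEEE 1588 (PTP) delay request-response calculation.
# Timestamp values are hypothetical, in nanoseconds.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """Compute the slave clock's offset from the master and the mean
    path delay from the four PTP message timestamps:
      t1: master sends Sync         t2: slave receives Sync
      t3: slave sends Delay_Req     t4: master receives Delay_Req
    Assumes a symmetric network path; any asymmetry translates
    directly into offset error, one of the accuracy issues that can
    go undetected for months."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

offset, delay = ptp_offset_and_delay(1000, 1600, 2000, 2400)
print(offset, delay)  # offset 100.0 ns, mean path delay 500.0 ns
```

Note that the formula cannot distinguish clock offset from path asymmetry, which is why architecture and deployment weaknesses show up as silent accuracy loss rather than visible failures.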

Putting time-sync standards to use (Fast Data)
 

STAC-TS is a set of standards for testing time-stamping and event-capture solutions. Beginning this year, trading firms will be able to use STAC-TS results to demonstrate traceability in their system designs for regulations like MiFID 2, as well as to evaluate gear for monitoring ultra-low-latency information flows, where accuracy requirements are typically much more stringent. A group of experts from trading firms and vendors has been developing these standards and the supporting tools for the past several months. In this talk, Peter will provide an overview of the methodologies covered by the standards, the tools that are becoming available, and the emerging testing agenda.

Innovation Roundup (Fast Data)
  "Timing Systems and GNSS Vulnerabilities"
    Jean-Arnold Chenilleau, Application Engineer, Spectracom
  "Financial Trading Demands Next-Generation Performance Monitoring"
    Alessandro Lucchese, Senior Solutions Engineer, cPacket
  "Napatech's 2 x 40GbE Full Packet Capture Solution"
    Michael Wright, Business Development Manager, Napatech
  "How Provenance can make MiFID 2 timing compliance easy"
    Rob Earley, Senior Pre-Sales Engineer, EMEA and India, Endace

 

What's the reference for the reference? Proving timestamp accuracy to the nanosecond (Fast Data)
 

Ultra-low latency trading firms and the exchanges that cater to them increasingly care about time measurements in nanoseconds, and vendors of network-timestamping devices often claim accuracy in the tens or even single-digit nanos. But how can one independently verify those claims? A basic principle is that to measure the accuracy of one thing, you need something more accurate with which to measure it. In this talk, David will explain a methodology within STAC-TS that Metamako has spearheaded, in which the accuracy of a device's timestamps on 1Gbps or 10Gbps Ethernet can be determined to a single nanosecond.

Innovation Roundup (Fast Data)
  "Solarflare Sets the Latency Bar"
    David Riddoch, Chief Architect, Solarflare
  "Low latency infrastructure for 25GbE and above"
    Asaf Wachtel, Sr. Director, Business Development, Mellanox Technologies
  "Low latency networking, time synchronization and packet capture using Exablaze NICs and switches"
    Matthew Grosvenor, Staff Engineer, Exablaze
  "Product update: always better performance and broader coverage!"
    Yves Charles, VP Business Development, NovaSparks

 

Real-time algorithms: new approaches and new benchmarks (Fast Data)
 

Many markets today rely on fast computation of prices, sensitivities, and other analytics in response to new market information. In fact, given the extent to which most firms have already squeezed latency out of I/O, speedups in trading algorithms themselves and the calculation engines that support them can often provide the biggest payoffs. Some techniques for acceleration rely on offloading calculations to a co-processor. In this talk, Stephen will review the tradeoffs of these techniques and offer his point of view on the most promising approach (hint: it involves FPGA). Along the way, he will discuss how Intel is approaching the integration of FPGA not only at a hardware level but also at a developer level. Peter will then close out the talk by discussing a proposed technology-agnostic STAC Benchmark framework for real-time algorithmic solutions implemented in hardware or software.

Using FPGA to accelerate trading strategy development (Big Compute)
 

In the effort to develop winning algos more quickly, many trading firms utilize genetic programs, which iteratively mutate strategies, assess their fitness, and select survivors for reproduction and further evolution. The bottleneck in this process is often evaluating strategies via the fitness function, a very compute-intensive process. Ingrid's group at Imperial College investigated the use of FPGAs to accelerate fitness evaluation. Their pipelined design made use of massive parallelism on chip to evaluate the fitness of multiple genetic programs simultaneously. On both synthetic and historical market data, this design outperformed non-FPGA approaches by over 20x. In this talk, Ingrid will use an example from the FX market to explain the use case, the solution design, and the tests they performed with it.
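The evolutionary loop Ingrid's group accelerated can be sketched as follows. The fitness function here is a toy stand-in; in the Imperial College work it is the compute-intensive strategy backtest, and it is precisely this evaluation step that their pipelined FPGA design parallelizes across multiple genetic programs at once.

```python
import random

# Minimal sketch of a genetic-programming loop: mutate strategies,
# score them with a fitness function, keep the fittest. Strategy =
# a small parameter vector; all names and values are illustrative.

random.seed(42)

def fitness(strategy):
    # Hypothetical stand-in for a backtest: reward parameter vectors
    # close to a fixed target. This is the bottleneck step that the
    # FPGA design evaluates for many strategies simultaneously.
    target = [0.5, -0.2, 0.8]
    return -sum((s - t) ** 2 for s, t in zip(strategy, target))

def mutate(strategy):
    # Small Gaussian perturbation of each parameter.
    return [s + random.gauss(0, 0.1) for s in strategy]

def evolve(pop_size=20, generations=50):
    population = [[random.uniform(-1, 1) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Assess fitness and select survivors.
        scored = sorted(population, key=fitness, reverse=True)
        survivors = scored[:pop_size // 2]
        # Reproduce survivors with mutation for the next generation.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # close to 0 (the optimum) after 50 generations
```

Because each candidate's fitness is independent of the others', the evaluation step is embarrassingly parallel, which is what makes it such a natural fit for massive on-chip parallelism.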

STAC Update: Big Compute (Big Compute)
 

Peter will present the latest STAC-A2 benchmark results and explain how the benchmark suite has been enhanced to improve insight into performance and power/space efficiency.

STAC-A2 on Intel: The story behind the numbers (Big Compute)
 

Evgeny will review what it takes to extract full advantage from the latest architectures.

Convolutional neural nets for financial markets (Big Compute, Big Data)
 

Convolutional neural networks (ConvNets) are a deep learning technique that has proven very successful in tasks such as image processing, recommendation systems, and beating humans at the game of Go. Can ConvNets help in trading? In this talk, Stephen will describe work at Imperial College on applications of ConvNets to financial markets, including how to use reconfigurable hardware to make ConvNets faster and more economical.
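The operation at the core of a ConvNet can be illustrated on a 1-D price series rather than an image. This is only a sketch; the price values and kernel are made up, and a real ConvNet stacks many such layers with learned kernels.

```python
# Minimal sketch of the convolution underlying a ConvNet, applied to
# a 1-D price series. Kernel values are illustrative, not learned.

def conv1d(series, kernel):
    """Slide the kernel over the series (valid mode, stride 1) and
    return the dot product at each position."""
    k = len(kernel)
    return [sum(series[i + j] * kernel[j] for j in range(k))
            for i in range(len(series) - k + 1)]

prices = [100, 101, 103, 102, 105, 107]
# A [-1, 0, 1] kernel responds to local trend, much as edge-detecting
# kernels respond to intensity changes in image processing.
features = conv1d(prices, [-1, 0, 1])
print(features)  # [3, 1, 2, 5]
```

Because the same small kernel is applied at every position, the arithmetic is highly regular and parallel, which is what makes convolutions attractive targets for the reconfigurable hardware Stephen will discuss.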

STAC Update: Big Data (Big Data)
 

Peter will review the latest benchmark results involving big data workloads.

It’s the infrastructure, stupid: How to process big data faster (Big Data)
 

Financial markets have had “big data” challenges since long before the term entered the hype curve. While most of today’s hype surrounds open source data-management software that is causing heartburn in the traditional database industry, there’s another, less visible revolution going on in infrastructure. This includes major advances in hardware such as non-volatile media and interconnects, as well as software layers that enable applications to take better advantage of this new hardware (and of existing kit). STAC has brought together a panel of innovators, each of whom has a unique angle on the problem, but all of whom believe that firms can achieve performance that was either impossible or too expensive until now. Our panelists will debate the pros and cons of these new approaches to data-intensive workloads. To kick off, each will provide a short presentation:

  "Architectures for Accelerating IO with Flash for Key Financial Services"
    James Coomer
  "Time Series Analytics at Scale"
    Anand Bisen
  "Using Flash to attain Ultra-low latency, Hi-performance for Hybrid Transactional/Analytical Processing"
    Bernie Wu

 


PLATINUM SPONSOR


GOLD SPONSORS

Metamako

Corvil


DDN Storage



MEDIA PARTNER

About STAC Events & Meetings

STAC events bring together CTOs and other industry leaders responsible for solution architecture, infrastructure engineering, application development, machine learning/deep learning engineering, data engineering, and operational intelligence to discuss important technical challenges in finance.