STAC Summit, 7 May 2018, Chicago

STAC Summits

STAC Summits bring together industry leaders in architecture, app dev, infrastructure
engineering, and operational intelligence to discuss important technical challenges in
the finance industry. Come to hear leading ideas and exchange views with your peers.

The Metropolitan Club
Willis Tower
233 South Wacker Drive, 66th Floor, Chicago


Click on the session titles to view the slides (may require member permissions).


Big Compute

Fast Compute

Big Data

Fast Data


Making data science pay: Mastering the challenges of analytics operations (Big Data)

Many asset managers, hedge funds, brokers, and other financial firms are under pressure today to improve profitability through better application of technology to data. Some of them are looking to extract value from new kinds of information, while others are more focused on improving the yield from data they already have. But all of them are turning to data science and its most intriguing subset, machine learning, to analyze these datasets. However, many enterprises struggle to derive business value from these analytics, irrespective of the scale of their data science investment. Putting the models into production so that they feed either human or automated decision systems is typically slow, ad hoc, and expensive. Once in place, all too often the data quality is poor, the operational support is inadequate, and model performance decays over time. As a former quant who learned the hard way how to build effective analytics operations, Michel believes that CTOs and Heads of Research need to treat data-driven analytics as an industrial process. This does not mean squashing the creativity out of data scientists. On the contrary, Michel will argue that the right technology frameworks and end-to-end processes can liberate the creative energy of data scientists while maximizing the value they deliver to the business. In this talk, he will make his case.

Oops, that’s our infrastructure? How to do AI at scale (Big Data, Big Compute)

So you’ve decided to invest in AI. Great. You’ve hired some bright data scientists. Check. And you buy them each a machine with a GPU, or maybe you let them spin up an instance in a public cloud. Fine. Now how do you enable them to scale beyond a single node? And how do you do it in a way that allows several data scientists with highly iterative AI workflows to share compute resources and large datasets? And how will they leverage both cloud and on-prem resources? According to Eric, these issues crop up very quickly when firms head down the AI route. And the most common solutions to these challenges solve only part of the problem, slow down the firm’s AI effort, and impose a high total cost. In this talk, Eric will argue for a different way of approaching scalable AI that requires less manpower and infrastructure while providing the firm with more agility.

Using FPGA for financial analytics: Has the programmability nut been cracked? (Big Data, Big Compute)

Field programmable gate arrays (FPGA) have long been used for ultra-low latency processing of network packets, such as parsing market data or sending trade-execution instructions. But the massive parallelism, high power efficiency, and growing memory capacity of FPGA technology have also held out promise for more compute-intensive workloads such as risk management, backtesting of complex trading strategies, and artificial intelligence. The main obstacle for FPGA in these areas has been development: programming the hardware has been a slow process requiring highly specialized skills. The FPGA ecosystem has tried many ways to make FPGA acceleration accessible to traditional software developers, but none has caught on to date. Is that about to change? In this panel, we will review the business requirements that FPGA solutions must meet, the latest ways to enable software developers to offload analytics to FPGA accelerators, and some of the key design considerations to get maximum performance from such applications.

STAC Update: Big Workloads (Big Data)

Peter will present the latest benchmark results involving big workloads such as tick analytics and backtesting, and discuss new benchmarks that combine big data with big compute.

Innovation Roundup (Big Data)
  "Solving the market data ingestion problem"
    Edouard Alligand, CEO, QuasarDB
  "Profit from The Hyperscalers: Build Cloud-scale Storage using Quobyte and Commodity Hardware"
    Björn Kolbeck, co-founder/CEO, Quobyte
  "Lenovo FSI Innovations"
    Dave Weber, Wall Street CTO & Director, Lenovo


How to make best use of leading non-volatile memory technologies (Big Data)

The non-volatile memory landscape has changed dramatically from a few years ago, with new offerings spreading out to occupy very different points along the axes of density, performance, and cost. What are the best ways to use these new offerings to meet business objectives? In this talk, Mark will discuss what we can learn about the answers to these questions from test results and use cases in the field.

The STAC Cloud SIG (Fast Data, Big Data, Fast Compute, Big Compute)

Increasing the use of public, private, or hybrid clouds is high on the agendas of many financial firms. However, when making cloud decisions, these firms face a number of questions and obstacles in areas like security, price-performance, and functionality. The new STAC Cloud Special Interest Group (SIG) is a group of financial firms and vendors that has set out to standardize methods of assessing cloud solutions, facilitate dialog and best practices, and guide a testing program. Peter will explain what it’s all about.

STAC Update: Time Sync (Fast Data)

Peter will provide the latest information regarding STAC-TS tools and research in the area of time synchronization, timestamping, and event capture.

Nano-level synchronization at large scale: A case study (Fast Data)

As latency and jitter continue to decrease, both market participants and market centers need increasingly accurate time synchronization. And many of them need accuracy at large scale, such as exchanges that must ensure fairness while executing orders that can arrive nanoseconds apart at hundreds of exchange gateways. However, achieving this is difficult. Standard PTP is scalable but not nanosecond accurate. PPS can support nanosecond-level accuracy but does not scale. According to Francisco, other industries have found that an emerging technology called White Rabbit (based on PTP and SyncE) provides nanosecond-level accuracy at large scale. In this talk, Francisco will describe how Deutsche Boerse deployed White Rabbit to improve its insight into order arrival and end-to-end execution time for different algorithms across an entire datacenter.
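To see why standard PTP tops out above nanosecond accuracy, it helps to look at the arithmetic of its two-way exchange. The sketch below is the textbook PTP offset/delay calculation, not anything specific to the Deutsche Boerse deployment; the key weakness is that the math must assume the path delay is symmetric, so any asymmetry translates directly into clock error (which is part of what White Rabbit's calibrated links address).

```python
# Standard PTP-style offset/delay estimation from a two-way exchange.
# t1: master sends Sync; t2: slave receives it;
# t3: slave sends Delay_Req; t4: master receives it.
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Return (clock_offset, one_way_delay), assuming a symmetric path."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Example with nanosecond timestamps: the slave clock runs 150 ns ahead
# of the master, and the true one-way path delay is 500 ns.
t1 = 0
t2 = t1 + 500 + 150    # arrival per slave clock = delay + offset
t3 = t2 + 1_000        # slave waits, then sends Delay_Req
t4 = t3 - 150 + 500    # arrival per master clock = -offset + delay
print(ptp_offset_and_delay(t1, t2, t3, t4))  # -> (150.0, 500.0)
```

If the forward and reverse delays differ by d, half of d appears as a phantom clock offset, which is why uncalibrated PTP struggles at the nanosecond level.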

Innovation Roundup (Fast Data)
  “EndaceProbe as a Platform for Engineering Teams”
    Tom Leahy, Senior Sales Engineer, Endace
  "Metamako - 5 years of innovation in 5 mins"
    Dr David Snowdon, Founder & CTO, Metamako
  “Bigger, Badder, Better!”
    Davor Frank, Senior Solutions Architect, Solarflare Communications


Innovation Roundup (Fast Data)
  "Product Updates -Tick to Trade and Microwave"
    Arnaud Lesourd, Senior Application Engineer, NovaSparks
  "Enyx Product Update"
    Laurent de Barry, Co-founder & Chief Sales Officer, Enyx
  "Advanced FPGA technologies in trading: latest products from LDA."
    Vahan Sardaryan, Co-Founder and CEO, LDA Technologies


The Big MAC Mystery: What is a MAC and how do you measure it? (Fast Data)

One of the most interesting recent developments in the latency race has been the growth of an end-user market for Medium Access Controller (MAC) products. For most of the history of networks, the MAC was a layer of functionality buried deep in network devices, far from the concern or scrutiny of the application developer. However, as more trading firms move their trading logic from software into FPGA-powered network hardware, a number of vendors have begun to expose their MAC logic as FPGA IP cores for sale. This has led to a problem that is common in nascent markets: significant confusion around product definition, differentiation, and performance claims. That is, vendors are offering different functionality under the MAC banner, accompanied by significantly different performance claims. In this talk, Matthew will propose a precise definition of the minimum feature set of a 10Gb/s Ethernet MAC, along with a range of potential methodologies to accurately and consistently measure MAC latency.
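Whatever measurement methodology Matthew proposes, consistent reporting matters as much as consistent capture: a single "MAC latency" number hides the distribution. As a hypothetical illustration (the timestamps and function below are invented for this sketch, not a STAC methodology), paired capture timestamps taken as each frame enters the MAC transmit interface and as its first bit appears on the wire can be reduced to a min/median/max/jitter summary:

```python
# Hypothetical post-processing for a MAC latency test: each frame is
# timestamped (in nanoseconds) once at MAC ingress and once when its
# first bit hits the wire; we report the distribution, not one number.
import statistics

def summarize_mac_latency(ingress_ns, egress_ns):
    """Summarize per-frame MAC transit times from paired timestamps."""
    samples = [e - i for i, e in zip(ingress_ns, egress_ns)]
    return {
        "min": min(samples),
        "median": statistics.median(samples),
        "max": max(samples),
        "jitter": max(samples) - min(samples),  # peak-to-peak
    }

# Invented sample data: four frames sent ~2 us apart.
ingress = [100, 2_100, 4_100, 6_100]
egress = [142, 2_145, 4_141, 6_150]
print(summarize_mac_latency(ingress, egress))
```

Comparing vendors on min alone rewards cherry-picking; publishing the full distribution under an agreed frame size and load is what makes claims comparable.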

Why your transatlantic trades are getting picked off (Fast Data)

While enjoying his garden leave from a Chicago HFT shop, Bob Van Valzah sometimes likes to go on long bike rides in the Chicago suburbs. During one of these trips, he recently discovered some mysterious radio towers in an industrial park, which led him to do some detective work. His strong conclusion: someone is operating a transatlantic radio link for HFT. In this talk, Bob will offer up the photos, filings, and other evidence so you can draw your own conclusions.

About STAC Events & Meetings

STAC events bring together CTOs and other industry leaders responsible for solution architecture, infrastructure engineering, application development, machine learning/deep learning engineering, data engineering, and operational intelligence to discuss important technical challenges in finance.