STAC Summit, 29 Oct 2019, Chicago
STAC Summits bring together CTOs and other industry leaders responsible for solution architecture, infrastructure engineering, application development, machine learning/deep learning engineering, data engineering, and operational intelligence to discuss important technical challenges in trading and investment.
Come to hear leading ideas and exchange views with your peers.
WHERE
The Metropolitan Club
Willis Tower
233 South Wacker Drive, 66th Floor, Chicago
Agenda
Click on the session titles to view the slides.
 
Big Compute
Fast Compute
Big Data
Fast Data
 
STAC Update: Big Compute
Michel will discuss the latest research and activities in compute-intensive workloads such as deep learning and derivatives risk.
Why a single C++ API makes sense for heterogeneous compute infrastructure
The future of computing in finance certainly seems heterogeneous. It’s a fair bet that in the coming years, optimizing the latency, throughput, and cost efficiency of a given workload will increasingly require some combination of scalar (CPU), vector (GPU), matrix (AI), and spatial (FPGA) processors. These architectures require an efficient software programming model to deliver performance. As we often discuss at STAC, high-level languages like Python or frameworks like Spark make it relatively easy to deal with this diversity, since they allow for highly optimized platform-specific libraries under the covers. But what about programs written in C++? Many performance-obsessed programmers prefer C++ because it provides the greatest exposure to the capabilities of underlying hardware. With that exposure, however, comes a requirement to code to the specifics of the hardware, making coding difficult and non-portable. Furthermore, attempts to program FPGAs in C++ have historically suffered in terms of performance. In short, no one has yet come up with a market-winning answer to the tension between performance, portability, and ease of use. However, as a provider of all of the processor types above, Intel has developed a point of view on the best approach to these challenges. As a senior technologist in Intel’s compiler team, Vladimir will articulate that point of view as well as outline how Intel is putting it into practice through its oneAPI initiative (including architecture, tooling, and development status).
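For illustration only (not from the session materials): below is a minimal sketch of the kind of single-source C++ kernel that a SYCL-based model such as oneAPI's Data Parallel C++ targets, where the same lambda can be compiled for whichever CPU, GPU, or FPGA device the queue selects. The vector-add workload, sizes, and default device selection are illustrative assumptions, and the sketch assumes a SYCL 2020-capable compiler.

    #include <sycl/sycl.hpp>
    #include <iostream>
    #include <vector>

    int main() {
        constexpr size_t n = 1024;                        // illustrative problem size
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);  // host data

        // One queue can target a CPU, GPU, or FPGA; the kernel source is unchanged.
        sycl::queue q{sycl::default_selector_v};
        std::cout << "Running on: "
                  << q.get_device().get_info<sycl::info::device::name>() << "\n";

        {   // Buffers hand the data to the runtime for the duration of this scope.
            sycl::buffer<float, 1> A{a.data(), sycl::range<1>{n}};
            sycl::buffer<float, 1> B{b.data(), sycl::range<1>{n}};
            sycl::buffer<float, 1> C{c.data(), sycl::range<1>{n}};
            q.submit([&](sycl::handler& h) {
                sycl::accessor ra{A, h, sycl::read_only};
                sycl::accessor rb{B, h, sycl::read_only};
                sycl::accessor wc{C, h, sycl::write_only, sycl::no_init};
                // The same C++ lambda runs on whichever device the queue selected.
                h.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
                    wc[i] = ra[i] + rb[i];
                });
            });
        }   // buffer destruction waits for the kernel and copies results back to c

        std::cout << "c[0] = " << c[0] << "\n";  // expect 3
        return 0;
    }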
Enabling value extraction from limit order book data
Applying Machine Learning/Deep Learning techniques to the highly complex, non-linear, and pattern-rich world of financial market microstructure is a significant engineering challenge. One problem is that the data is vast and fast-flowing. Collecting and processing the data into a large number of features to feed the algorithms—not just once, but nearly continuously—is heavy lifting. And continuous retraining of ML and (especially) DL models requires massive, parallelized compute resources. In this talk, Hugh will provide lessons on tackling these challenges that BMLL has learned from building a third-party research platform focused on global limit order book data. Starting with feature engineering examples from futures data, Hugh will articulate the philosophy behind BMLL’s architectural approach, as well as the key elements of a highly scalable processing and analytics pipeline that leverages Apache projects in the cloud to enable AI on order book data.
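For illustration only (not BMLL's code or feature set): a toy example of the kind of derived feature such a pipeline might compute from top-of-book data, here a bid/ask size imbalance signal. The struct, field names, and sample values are assumptions.

    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Hypothetical snapshot of the top of a limit order book.
    struct BookTop {
        int64_t ts_ns;          // exchange timestamp, nanoseconds
        double  bid_px, ask_px; // best bid/ask prices
        double  bid_sz, ask_sz; // resting size at best bid/ask
    };

    // Order-book imbalance in [-1, 1]: positive when resting bid size dominates.
    double imbalance(const BookTop& t) {
        const double total = t.bid_sz + t.ask_sz;
        return total > 0.0 ? (t.bid_sz - t.ask_sz) / total : 0.0;
    }

    int main() {
        // Tiny synthetic sample standing in for a futures order book stream.
        std::vector<BookTop> ticks = {
            {0, 99.5, 100.0, 300, 100},
            {1, 99.5, 100.0, 120, 400},
        };
        for (const auto& t : ticks)
            std::cout << t.ts_ns << " imbalance=" << imbalance(t) << "\n";
        return 0;
    }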
 
STAC Update: Big Data
Michel will discuss the latest research and activities in data-intensive workloads such as tick analytics and backtesting.
 
Drinking from the firehose: streaming ingest benchmarks
Most of the fast data that flows through a financial organization winds up as big data. That is, it's captured in a database somewhere for analysis, either immediately or later. But the process of ingesting high-volume streaming data and making it available through visualizations or query interfaces is challenging and getting more so. This session will examine empirical data from two examples in this problem domain. Peter will first present a benchmarking project on a visualization system designed specifically for real-time streaming data. Then he will present a proof of concept of database ingest tests using event-driven data streams, which will be proposed for consideration by the STAC-M3 Working Group.
STAC Update: Fast Data
Peter will discuss the latest research and activities in latency-sensitive workloads such as tick-to-trade processing.
 
How hard could it be? Understanding network traffic at the picosecond level
The proliferation of double-digit nanosecond (FPGA-based) trading systems is forcing firms to measure things at finer and finer accuracies. Several vendors now offer sub-nanosecond or “picosecond-scale” network measurement technologies. Firms that make use of such technologies need to consider what other changes, if any, they need to make to their measurement infrastructure as a result. Is it feasible to simply “drop in” picosecond-scale network measurements, or are fundamental changes in thinking required? In this session, Matthew will offer theoretical and practical viewpoints on the implications of picosecond-scale network measurement techniques. To illustrate these, he will refer to Exablaze’s work with STAC to “upgrade” certain STAC benchmarks to accuracies better than a nanosecond.
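For illustration only (not from the session): a back-of-envelope sketch of why timestamp resolution starts to matter once the intervals being measured are themselves only tens of nanoseconds. The resolution figures, the 25 ns interval, and the assumption that each timestamp can be off by up to one full resolution step are illustrative, not measured.

    #include <cstdio>

    // Worst-case error contributed by timestamp quantization, assuming each of
    // the two timestamps bounding an interval can be off by one resolution step.
    double worst_case_error_ns(double resolution_ns) { return 2.0 * resolution_ns; }

    int main() {
        const double interval_ns = 25.0;  // e.g. a double-digit-nanosecond FPGA path
        // Coarse, 1 ns, and "picosecond-scale" resolutions (illustrative values).
        for (double res_ns : {6.4, 1.0, 0.1}) {
            const double err = worst_case_error_ns(res_ns);
            std::printf("resolution %.1f ns -> worst-case error %.1f ns "
                        "(%.0f%% of a %.0f ns interval)\n",
                        res_ns, err, 100.0 * err / interval_ns, interval_ns);
        }
        return 0;
    }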
 
Why accuracy-driven markets will transform trading
As we've discussed many times at STAC, liquid markets have been in a positive-feedback loop between determinism and latency for the last several years. In an effort to improve fairness, exchanges and other trading venues have become more deterministic, increasing the likelihood that orders that arrive first are executed first. Reducing this uncertainty for trading firms has increased the return those firms can get from reducing their latencies by small increments. The more these firms reduce their tick-to-trade latencies, the larger the impact that small uncertainties in trading venue processing can have on fairness, hence the more pressure venues feel to improve determinism even further. According to Dave, some venues see a way out of this vicious cycle: much more accurately determining which orders arrived first. To the extent this is possible (and Dave will present evidence that it is), venue applications can use these arrival times to determine execution order, thus relaxing the need for deterministic infrastructure. But what will this imply for trading firms? Will this simply replace the determinism-latency cycle with an accuracy-latency cycle that is perhaps even more demanding? Come to hear Dave’s view and join in the discussion.
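For illustration only (not any venue's actual mechanism): a toy sketch of the idea, in which inbound orders carry a hardware ingress timestamp and the venue sequences a batch by measured arrival time rather than by the order in which software happened to dequeue the messages. All names and values are assumptions.

    #include <algorithm>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Hypothetical inbound order tagged with a capture-point ingress timestamp.
    struct InboundOrder {
        uint64_t ingress_ps;  // high-resolution arrival time at the venue edge
        uint64_t order_id;
    };

    // Sequence a batch by measured arrival time instead of by the (jittery)
    // order in which gateway software dequeued the messages.
    void sequence_by_arrival(std::vector<InboundOrder>& batch) {
        std::stable_sort(batch.begin(), batch.end(),
                         [](const InboundOrder& a, const InboundOrder& b) {
                             return a.ingress_ps < b.ingress_ps;
                         });
    }

    int main() {
        // Software dequeue order disagrees with measured wire arrival order.
        std::vector<InboundOrder> batch = {
            {1'000'250, 42}, {1'000'100, 7}, {1'000'175, 13}};
        sequence_by_arrival(batch);
        for (const auto& o : batch)
            std::cout << "order " << o.order_id << " @ " << o.ingress_ps << " ps\n";
        return 0;
    }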
About STAC Events & Meetings
STAC events bring together CTOs and other industry leaders responsible for solution architecture, infrastructure engineering, application development, machine learning/deep learning engineering, data engineering, and operational intelligence to discuss important technical challenges in finance.
Speakers
Hugh Christensen, BMLL Technologies
Nick Kirsch, atsu corp.
Theo Schlossnagle, Circonus
Rocky Zayed, VAST Data
Charles Fan, MemVerge
Frederic Leens, Exostiv Labs
Nino De Falcis, FSMTime by FSMLabs
Vladimir Polin, Intel
Alastair Richardson, Xilinx
Dr. David Snowdon, Arista
Dr. Matthew Grosvenor, Exablaze
Boni Bruno, Dell EMC
Matt Meinel, Levyx
Dan Romanelli, USAM Group (speaking for QuasarDB)
Rebecca Kelly, Kx Systems
David Taylor, Exegy
Laurent de Barry, Enyx
Cliff Maddox, Novasparks
Vahan Sardaryan, LDA Technologies
Brian Grant, Mellanox