Global STAC Live, Fall 2021
STAC Summits bring together CTOs and other industry leaders responsible for solution architecture, infrastructure engineering, application development, machine learning/deep learning engineering, data engineering, and operational intelligence to discuss important technical challenges in trading and investment.
In the fall of 2021 we again combined our New York, Chicago, and London events into a massive virtual event for the worldwide STAC Community.
Click on the session titles to view the recording and slides.
(REMINDER: All materials displayed through the website are subject to copyrights of STAC or other parties.)
|Is matching marching to the cloud?
At last fall's Global STAC Live, Nikolai Larbalestier presented Nasdaq's view that exchanges will move to the cloud. The debate it prompted was immediate and intense, with reactions ranging from “the shift is imminent” to “it will never happen”. A year later, the discussion continues, with many questions still unanswered. Market data distribution is already moving to the cloud, but will order entry and matching follow? Are some markets more suited to the cloud than others? What are the technical hurdles, and are they real, perceived, or both? If exchanges move, will customers follow? Join our panelists as they debate the future of trading in the cloud. To kick off, one of the panelists provided a short presentation:
| "Making AI real in capital markets."
Ulku Rowe, Technical Director, Financial Services, Google
|Squeezing every drop from a cloud: Lessons in full-stack optimization
Financial firms are moving mission-critical workloads to the public cloud. While high-performance requirements remain the same, cloud infrastructures—and the performance optimization challenges they present—are undergoing significant change. Doshi has extensive experience dealing with these challenges. He and his team at Intel recently helped a capital markets firm discover and remedy performance bottlenecks that resulted from a move to the cloud. He'll walk you through the investigation, point out where it differed from similar on-prem work, describe the results, and summarize the lessons learned. Along the way, he'll highlight the tools Intel leveraged to find and solve the problems they encountered.
|STAC Low Latency Update
Peter will present the latest in fast data at STAC, including an update on the FPGA Special Interest Group.
|Hacking the packet in ASIC with eFPGA
Engineers who need to stay at the front of today’s latency race must make a critical architectural decision: FPGA or ASIC? ASICs can speed up network IO, but the R&D process is complicated and the risk is high. FPGAs pair the efficiency of hardware with programmability, but new FPGA architectures have some engineers questioning FPGA’s future for ultra-low latency. So which should you choose? Dean thinks both. In his view, the answer is an ASIC with an embedded FPGA (eFPGA) block, an approach that combines the benefits of ASIC and FPGA. After giving a brief overview of eFPGA, Dean will offer a view on what logic should be baked into hardware and what should remain programmable by the trading firm. He’ll also discuss available IP and ASIC design resources. If you’re interested in obtaining the speed of ASIC while maintaining an ability to "hack the packet" and bring your special sauce to hardware, you won’t want to miss this talk.
|Best practices in design for FPGA or ASIC
For years, achieving ultra-low latency in trading has required hardware acceleration, traditionally via FPGA and increasingly via ASIC. But Adam is quick to remind us that hardware acceleration is necessary but not sufficient. It's possible to end up with slow logic in hardware, so knowing how to build fast logic is key. Moreover, getting the right result under all conditions is just as important as getting the result quickly. Which is all to say: design matters. In this talk, Adam will cover methods that help you identify latency bottlenecks in your logic design, whether targeting FPGA or ASIC. He'll include ways to reduce states, resets, and reliance on centralized resources while identifying and avoiding race conditions. Along the way, he'll explore how to verify boot sequences, ensure data consistency, and verify logic against the complex conditions that arise in hardware design.
|STAC Time Series Update
Peter will present the latest benchmark results from software/hardware stacks for historical time series analytics.
|Automating the delivery of insight
Morningstar is a data-driven company, and data is the fuel for the insights they deliver to customers. Nearly every group within the company has quantitative analysts, data scientists, and machine learning engineers. The central Core Analytics Platform team provides them with the tools and services they need to deliver their research as quickly as possible. In this talk, Jin will describe how the team used open source technologies and cloud services to create two innovations to scale the delivery of insights: Analytics Lab, a data science platform tightly integrated with the Morningstar Data Lake, and Starflow, a framework for packaging up a model/engine for self-service deployment.
|Facebook's open-source time appliance
To deal with huge data growth, Facebook has been moving to a tightly coupled distributed data infrastructure. But preserving event order across highly dispersed nodes requires a time synchronization solution that is both highly accurate and highly scalable. So Facebook created Time Card, a PCIe card with a multiband GNSS receiver, an onboard miniature atomic clock, and a time engine implemented in FPGA. Then they open-sourced it within the Open Compute Project in an effort to drive costs low enough to allow deployment in massive numbers. Join us as Ahmad explains Time Card and how Facebook uses it to enable faster performance from distributed databases as well as more advanced management of global infrastructure.
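The value of tight time sync for event ordering can be made concrete with a small sketch. This is purely illustrative (not Facebook's code, and the numbers are hypothetical): if each node reports an event time as an uncertainty interval bounded by its worst-case clock error, two events can be ordered definitively only when their intervals do not overlap, so shrinking that error lets a distributed system order events that were previously ambiguous.

```python
# Hypothetical illustration: timestamps as uncertainty intervals.
# With clock error bound eps, an event at local time t is known only
# to lie somewhere in [t - eps, t + eps].

def interval(t, eps):
    """Timestamp t from a clock with worst-case error eps (seconds)."""
    return (t - eps, t + eps)

def happened_before(a, b):
    """True only if event a definitely preceded event b."""
    return a[1] < b[0]

# Loosely synced clocks (eps = 10 ms) cannot order events 5 ms apart...
loose_a, loose_b = interval(1.000, 0.010), interval(1.005, 0.010)
# ...but tightly synced clocks (eps = 1 us) can.
tight_a, tight_b = interval(1.000, 0.000001), interval(1.005, 0.000001)

print(happened_before(loose_a, loose_b))  # False: order is ambiguous
print(happened_before(tight_a, tight_b))  # True: order is certain
```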
|Accuracy as a commodity: How Time Card will benefit applications
As just described by Ahmad Byagowi of Facebook, Time Card brings highly accurate and scalable time sync to Facebook's global infrastructure. But Facebook open-sourced Time Card on the premise that other industries could benefit. What about financial services? One use case is trade execution, where inexpensive atomic clocks in every node could be very advantageous. But in Intel's opinion, that's not where it ends. David will argue that changing the economics of time sync will benefit databases, AI/ML, and other architectures that rely on understanding or controlling a sequence of events—from market data analytics and pricing algorithms to risk management and fraud detection. He will go on to explain how application developers can integrate the precise time functionality of Intel's time cards into their mission-critical operations. If you care about big workloads or fast workloads, you'll want to attend this talk.
|Alerts ain’t enough: Turning system insight into action
As any firm that’s been burned by a production glitch knows, managing operational risk is just as critical as managing market, credit, or liquidity risk. They all threaten the firm’s capital. As a first line of defense, operations staff need system visibility that is immediate, accurate, precise, and comprehensive. However, when markets move in milli-, micro-, or nanoseconds, providing visibility to humans is not enough. Today’s trading demands automated actions in response to problems. Yet many questions stand in the way. How do we best correlate events across systems? How do we automate root-cause analysis? How far can trading algorithms take their use of realtime IT telemetry input? Our panel of operations experts will tackle these questions and others on how to break down the walls between the conservative world of IT and the risk-taking world of trading. Join us to add your questions to the mix. To kick off, some of the panelists will provide a short presentation:
| "Is Time Synchronization in the Cloud a Real Challenge for Financial Companies’ Digital Transformation?"
Areg Alimian, Sr. Director of Product Management, Keysight Technologies, Inc.
| "Future-Proof Transition to Cloud with a Unified Observability Strategy"
Andy Idsinga, Cloud Engineering Manager & Sr. Cloud Architect, cPacket Networks
| "Hi-fidelity trading analytics delivered 'as a service'. The trend towards outsourced and evergreen deployments."
Gareth Mason, Senior Innovation and Engineering Lead, Options Technology
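One of the panel's questions, how to correlate events across systems, can be sketched in a few lines. The approach and field names below are hypothetical, not any panelist's product: bucket events from different sources by a shared key within a time window, so that an automated responder sees related alerts as a single incident rather than a stream of disconnected signals.

```python
# Hypothetical sketch of cross-system event correlation: events sharing
# a key (e.g. a gateway id) within `window` seconds form one incident.

from collections import defaultdict

def correlate(events, window):
    """Group events that share a key and fall within `window` seconds."""
    by_key = defaultdict(list)
    for e in sorted(events, key=lambda e: e["ts"]):
        by_key[e["key"]].append(e)
    incidents = []
    for key, evs in by_key.items():
        bucket = [evs[0]]
        for e in evs[1:]:
            if e["ts"] - bucket[-1]["ts"] <= window:
                bucket.append(e)          # same incident
            else:
                incidents.append((key, bucket))
                bucket = [e]              # start a new incident
        incidents.append((key, bucket))
    return incidents

events = [
    {"source": "network", "key": "gw-7", "ts": 0.001},
    {"source": "app",     "key": "gw-7", "ts": 0.004},  # same incident
    {"source": "app",     "key": "gw-7", "ts": 9.000},  # separate incident
]
print(len(correlate(events, window=0.010)))  # 2
```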
|STAC Big Compute Update
Peter will present the latest benchmark results for big compute stacks and announce STAC's Inference Benchmark.
|Making a new ML forecasting approach practical
Using ML to predict prices requires firms to confront a fundamental tension: finding smarter models tends to require more complex modeling techniques, while more complex techniques tend to require substantially more compute (which translates into higher training cost, longer time-to-market, or both). In this talk, Stefan will explain how his team resolved this tension in a recent case. According to Stefan, the team developed a novel multi-horizon forecasting model for limit order books that significantly outperformed existing methods but was quite computationally expensive. After some research, the team overcame the compute bottleneck by moving from traditional hardware to Graphcore's Intelligence Processing Unit. Come to his talk to hear (and ask questions) about the model, the technical implementation, and the numbers.
|The trader of the future—revisited
Last fall at STAC, John argued that the future belongs to the trader who leverages the Python ecosystem, including ML/AI tools, to create smarter strategies faster. Part of this was using NLP to provide better contextual intelligence to trading and investing. That was so last year. This fall, John will discuss some of the latest advanced methods the NLP community has developed to use massive compute to create new kinds of natural language understanding (NLU). In particular, he will focus on how financial firms can use NLU to generate synthetic data that can be put to profitable use (and no, he doesn’t mean Reddit posts).
|Solving ML's data problem
Machine learning research continues to grab headlines with increasingly powerful achievements. But practitioners know that making ML work on the ground is a very different story. In particular, they sing a constant refrain: "I don't have an ML problem; I have a data problem." Good data pipelines are absolutely essential for ML projects to succeed. But they aren't easy. Data scientists must leverage new techniques, sometimes including ML itself, to ensure data is appropriately cleaned, enriched, and otherwise ready for feature engineering. At the same time, data engineers must ensure constant availability and minimize costs even while data sizes and data demand continue to expand. Join in as our panel of experts discuss how to solve these challenges. To kick off, some of the panelists will provide a short presentation:
| "DDN Parallel Storage for Faster Financial Analytics"
James Coomer, Sr. Vice President Products, DDN Storage
| "Develop your models faster, more accurately and for less cost by leveraging deep learning and large datasets"
Bob Gaines, Managing Director Financial Services, SambaNova
|Time series as a first-class citizen in Python
Python's data science ecosystem has been an indisputable boon to quantitative analysts. However, the most common data type that quants deal with—time series—has been a second-class citizen in the Python world. Compatibility issues between time series libraries slow research, while popular ML libraries, such as scikit-learn, focus on non-temporal data. Stepping up to solve these problems is sktime, an open source unified framework for machine learning with time series. As a key contributor to sktime, Markus will walk us through its critical features, such as time series transformation, classification, and forecasting. He'll cover recent updates to the algorithms and pipeline tools that make this possible, as well as how sktime interfaces with existing libraries, such as scikit-learn, statsmodels, and fbprophet. Come to learn more about sktime and ask Markus your questions.
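The scikit-learn-style interface that sktime brings to forecasting can be illustrated without the library itself. The class below is a plain-Python stand-in, not sktime's actual code: it mimics the fit/predict pattern with the simplest possible strategy, repeating the series' last observed value across the forecasting horizon.

```python
# Illustrative sketch only (no sktime dependency): a naive "repeat the
# last value" forecaster written in the sklearn-style fit/predict idiom.

class LastValueForecaster:
    def fit(self, y):
        self.last_ = y[-1]    # remember the final observation
        return self           # sklearn convention: fit returns self

    def predict(self, fh):
        # fh is a forecasting horizon, e.g. [1, 2, 3] steps ahead
        return [self.last_ for _ in fh]

y = [112, 118, 132, 129, 121]          # a short univariate series
forecaster = LastValueForecaster().fit(y)
print(forecaster.predict(fh=[1, 2, 3]))  # [121, 121, 121]
```

The design choice is the point: because every forecaster exposes the same fit/predict surface, models can be swapped, pipelined, and cross-validated with shared tooling, just as estimators are in scikit-learn.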