Global STAC Live, Spring 2021
STAC Summits bring together CTOs and other industry leaders responsible for solution architecture, infrastructure engineering, application development, machine learning/deep learning engineering, data engineering, and operational intelligence to discuss important technical challenges in trading and investment.
In spring of 2021 we again combined our New York, Chicago, and London events into a massive virtual event for the worldwide STAC Community.
Click on the session titles to view the recording and slides.
(REMINDER: All materials displayed through the website are subject to copyrights of STAC or other parties.)
|Cloud deploys: How not to drown in a sea of choice
While freedom to choose is generally a good thing, the dizzying array of choices a technologist faces when moving a workload to the cloud is often a significant impediment to migration. The choices start at a high level (private, public, or both? DIY, single-provider, multi-provider, or some combination?). Then comes the tooling (use cloud-specific tools or third-party options that promise portability?). And beneath each of these choices are dozens of competing options at successive layers of the stack. What’s more, the choices don't end once a system goes live because use cases and cloud offerings continue to evolve. How do you maintain efficiency while optimizing for performance and price? It can be overwhelming, but our panel of experts will help you keep your head above water. To kick off, some of the panelists will provide a short presentation:
| "Trading Analytics and Infrastructure: Blue Sky between the Clouds"
Matthew Cretney, Head of Product Management, Beeks Group
| "Densify: Leveraging Infrastructure Analytics for Cloud Performance and Cost Optimization"
Andrew Hillier, Co-founder & CTO, Densify
| "Getting the Best Value for Your Technology Spend"
Brian Kelly, CEO & Founder, CloudGenera
| "Real-time Continuous Optimization"
Asaf Ezra, CEO, Granulate
|Open source in FSI: more leverage, less risk
Most financial firms realize that code from outside the enterprise permeates their own. But during a career auditing open source usage, Jeff Luszcz has actually put numbers on it: about 95% of all codebases contain open source and 90% of all open source usage is unknown and untracked. How should senior tech leaders get an understanding of their organization's use of open source? How do you ensure a safe software supply chain without adding friction to the development process? What are the best practices for contributing to projects without leaking IP? As Director of Open Source at PEAK6, Jeff spends his days answering these very questions. Join us as he shares ways to further the benefits of open source while reducing its risks.
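Figures like those above imply that most audits have to start by discovering what is actually in the codebase, not what is declared. As a rough illustration only (the file names and scope here are illustrative, not a description of PEAK6's process, and real software-composition-analysis tools also fingerprint vendored and copy-pasted code), a first-pass census might simply walk a repository for dependency manifests:

```python
# Toy sketch of a first-pass open source inventory: walk a repo tree and
# collect the dependency manifests it contains. This only finds declared
# dependencies; the untracked 90% requires dedicated SCA tooling.
import os

# Illustrative set of manifest file names, one per common ecosystem.
MANIFESTS = {"requirements.txt", "package.json", "pom.xml", "go.mod", "Cargo.toml"}

def find_manifests(root: str) -> list[str]:
    """Return sorted paths of known dependency manifests under root."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        hits.extend(os.path.join(dirpath, name)
                    for name in filenames if name in MANIFESTS)
    return sorted(hits)
```

A real program would feed these paths into license and vulnerability checks; this sketch stops at discovery.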
|STAC Update: Fast Data
Peter will present the latest benchmark results for fast data stacks and provide an update on the FPGA Special Interest Group.
|Moving beyond the false dichotomy of FPGA solutions
“Build versus buy” is the question many teams set out to answer when first applying FPGA to a workload, but the answer is rarely so simple. Since third-party products seldom meet 100% of a firm’s requirements—yet every delay in deployment means unrealized profit—firms must blend off-the-shelf components with custom logic. But how should a manager draw the line between them? And what does that decision process look like? Using Quincy Data's first FPGA product as a working example, Mike will walk through an evolution from purchasing off-the-shelf offerings to outsourcing custom-made solutions to hiring an FPGA dev team. Along the way, he'll cover why each approach was necessary, point out lessons learned, and explain how an in-house team allows Quincy to blend custom IP with purchased components to deliver a better product.
|Research in realtime
Exogenous shocks—everything from tweets to pandemics to Reddit posts—shift markets, disrupt profitable strategies, and alter risk profiles. Firms that adapt to changing market dynamics survive; those that adapt fastest thrive. For alpha generation, risk management, or surveillance to be agile, the supporting research platform must allow a firm to search for signals or anomalies across historical and live data extremely quickly and to test algorithm changes just as fast. But this is easier said than done. How are systems today bridging the divides between live, recently live, and historical data? How well can platforms combine batch processing and streaming analytics? What is the impact on research and production workflows? How does this change the role of the quant? Once a platform enables quants to see the full picture, can it also help them avoid problems like overfitting or bias? Our panel of experts will discuss these questions and more. To kick off, some of the panelists will provide a short presentation:
| "Accelerating Analytics by Closing the Memory Storage Divide"
Steve Scargall, Persistent Memory Technical Sales Specialist, Intel Corporation
| "KX Streaming Analytics Platform & Time-Series Database"
Glenn Wright, Systems Architect, KX
|STAC Update: Time series stacks
Peter will present the latest benchmark results from software/hardware stacks for historical time series analytics and provide an update on streaming analytics benchmarks.
|Secure, low-friction data access with SPIFFE
Data is the lifeblood of the financial services industry. Critical workflows and personnel need immediate, low-friction access to data from multiple locations and devices. At the same time, data is also an enormous risk. Financial firms must control data access in order to comply with regulations and contracts and to protect their IP. How should data architects work within security policies to ensure that access is both easy and controlled? How can they keep up with constantly evolving data sets and security requirements? Bloomberg has deployed SPIFFE, an open-source technology in incubation at the Cloud Native Computing Foundation (CNCF), to simplify and streamline access to data while making security controls easy for developers to manage. Using his experience with this project as a jumping-off point, Phil will introduce SPIFFE and explain how it keeps data flowing where it should and not where it shouldn’t.
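SPIFFE's core abstraction is a URI-shaped identity issued to each workload, of the form spiffe://&lt;trust-domain&gt;/&lt;path&gt;. As an illustration of that identity format only (real deployments fetch and verify identities through the SPIFFE Workload API and its client libraries rather than hand-rolled parsing, and this helper is hypothetical), a minimal parser might look like:

```python
# Minimal sketch: splitting a SPIFFE ID into its trust domain and workload
# path. Illustrates the identity format only; verification in practice is
# done via SVIDs obtained from the SPIFFE Workload API.
import re

# Trust domain names are lowercase alphanumerics plus '.', '-', '_'.
_TRUST_DOMAIN = re.compile(r"^[a-z0-9._-]+$")

def parse_spiffe_id(uri: str) -> tuple[str, str]:
    """Split a SPIFFE ID into (trust_domain, workload_path)."""
    prefix = "spiffe://"
    if not uri.startswith(prefix):
        raise ValueError("SPIFFE IDs must use the spiffe:// scheme")
    trust_domain, _, path = uri[len(prefix):].partition("/")
    if not _TRUST_DOMAIN.match(trust_domain):
        raise ValueError(f"invalid trust domain: {trust_domain!r}")
    return trust_domain, ("/" + path if path else "")
```

A policy engine can then match the trust domain and path against access rules, which is what lets controls follow the workload rather than the network location.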
|Scaling the self-sufficient quant
An unaided quant taking an idea all the way from seed to production is an ideal too often unrealized. Most quants can explore opportunities on their desktop, and some of them can spread their tests across a few servers. But scaling research across years of tick data, hundreds or thousands of instruments, and thousands or millions of algorithm variants usually requires dedicated engineering resources. Moreover, moving to production requires a whole new level of IT involvement. At each stage, costs rise, time-to-market lengthens, and opportunities disappear. How can a research infrastructure speed up this process and empower a self-sufficient quant through the entire lifecycle of strategy development? What role can off-the-shelf software frameworks play? What parts of the data and analytics pipeline can become shared, managed resources? In what ways can hyperscaling technology or hardware acceleration help to manage massively parallelized research? Join our panel of experts as they discuss how to make the scalable self-sufficient quant a reality. To kick off, some of the panelists will provide a short presentation:
| "GCP and the self-sufficient quant"
Ashish Majmundar, Director, Global Head of Capital Markets, Google
| "Bigstream Hyperacceleration: The Analytics Performance Advantage"
Bishwa Roop Ganguly, Chief Solutions Architect, Bigstream
|STAC Update: Big Compute
Peter will present the latest benchmark results from STAC-A2 audits, as well as an update on STAC's AI inference benchmark.
|Overcoming data gravity: How to liberate applications from the pull of data stores
A decade’s worth of moving compute to the data has led to great focus on "data gravity": the tendency of a data store to attract a mass of applications that need to be near it. The concept is often invoked as an obstacle to moving data stores, such as from on prem to the cloud. But the challenge is broader: applications today must respond to many forces that oppose data gravity, such as the need to run wherever price-performance for compute is best or where proximity to other resources (e.g., an exchange matching engine) is crucial. As the number of workflows increases throughout an enterprise’s data centers and outside its firewalls, and as those workflows become increasingly elastic, how can storage architects best supply high-performance access to a wide array of datasets with years of history? What role do fat pipes, data tiering, and smart movement of datasets have to play? Join our panelists as they discuss how to accelerate time to insight using state-of-the-art storage architectures. To kick off, some of the panelists will provide a short presentation:
| "A modern data storage architecture that dramatically improves wall clock time"
Shimon Ben David, CTO, Weka
| "2 Years In: How New Data Reduction Can Achieve HDD Economics For All-Flash Market Data Storage"
Jeff Denworth, Co-Founder and CMO, Vast Data, Inc.
|Moving compute to the network
The STAC community is no stranger to the yin and yang of trading algorithms: increasing sophistication and decreasing latency. That is, competitive pressures require ever more complex decisions in increasingly tight timeframes. One critical factor in this race is the expanding range of processors capable of fast computation, but equally important is how close that computation can get to the network. The goal is to have as little standing between the trading venue and the trading decision as the laws of physics will allow. What is the state of the art today? Which kinds of computations can get significantly closer to the wire and which cannot? What innovations will shift that balance? Specifically, what role will CPUs, GPUs, FPGAs, matrix processors, DPUs, NICs, and switches play in this game? As logic moves deeper into the network, how can a firm oversee it without creating lots of overhead? Our panel of experts will debate. Bring your thoughts and questions to explore the possibilities. To kick off, some of the panelists will provide a short presentation:
| "NVIDIA is reinventing the data center"
Bill Webb, Director, Ethernet Switching, NVIDIA Networking, NVIDIA
| "Nexus 3550 & Nexus SmartNIC"
Dan Brown, Technical Solutions Architect, Ultra Low Latency, Cisco Systems
|Low latency cloud connectivity: A practical approach
As more trading desks enter the cloud-first cryptocurrency market and many traditional exchanges and dark pools explore moves to the public cloud, the need for low latency cloud connectivity is taking center stage. But how can a networking team extend their low latency infrastructure to the cloud? How fast, consistent, and reliable can connectivity be when using the shared, managed resources found in public clouds? What best practices ensure the network isn't a trade's weakest link? Ilya will explore these questions and more as he walks through a practical approach to low-latency cloud networking.