STAC Summit, 1 Jun 2022, NYC

STAC Summits

STAC Summits bring together CTOs and other industry leaders responsible for solution architecture, infrastructure engineering, application development, machine learning/deep learning engineering, data engineering, and operational intelligence to discuss important technical challenges in trading and investment.

WHERE
New York Marriott Marquis
1535 Broadway
New York
Astor Ballroom
7th Floor

Agenda

Click on the session titles to view the slides and video (may require member permissions).



What the time warp means for tech
 

Teams may be returning to the office, but the world is hardly "back to normal". It's not just that we're still fighting a pandemic with the global scale of the 1910s. We must also contend with inflation like the 1970s, war like the 1930s, and political risk in developed nations like the 1960s or even 1930s. The financial reactions to these events have some historical analogs: central banks are tightening to fight inflation, and markets are swinging wildly between fear and greed each day. What has no precedent is today's technology. Trading relies on massive automation, entirely new asset classes have sprung from nothing but bits, and the retail investor's smartphone has become a major source of liquidity. Technology should be what enables financial firms to navigate this new landscape, but it won't be easy. Firms have never found it harder to source technical brainpower or the computing power to match it. How do these challenges change what trading firms need from technologists, both in-house and at partner organizations like FCMs, exchanges, and clearing firms? Our panel of industry heavyweights will explore that question. Don't miss this chance to talk it through with them.

Introducing STAC-ML  (Fast Compute, Machine Learning)
 

Bishop will give an overview of the STAC-ML Markets (Inference) benchmark specifications, as well as present recent research that shows how the benchmark can help optimize stacks for latency and throughput.

“Model” marriages: Pairing implementations and hardware  (Fast Compute, Big Compute, Machine Learning)
 

Box famously said that all models are wrong but some are useful. Financial firms could add a proviso: “a model is only useful if we can execute it quickly and efficiently”. High performance depends on three variables: the problem size/shape, the implementation code, and the hardware platform. But financial models vary widely, hardware choices expand every few months, and implementation options abound. Technologists can benefit significantly from a systematic approach to understanding how the three variables interact. Luke has just such an approach. With the help of a real-life study that Graphcore performed for a hedge fund (optimizing Cox-Ross-Rubinstein to price millions of options), Luke will explain a process that categorizes and grades workloads and determines the best pairing of implementation and hardware.
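For readers who want a concrete reference point: the study Luke will discuss is not public, but the sketch below shows a generic, vectorized Cox-Ross-Rubinstein pricer in NumPy. The function name, parameters, and the choice to batch across strikes are illustrative assumptions, not Graphcore's implementation; the point is simply that the same lattice can be coded in very different shapes (scalar loops, batched arrays, massively parallel kernels), and each shape favors different hardware.

```python
# Illustrative only: a generic, vectorized Cox-Ross-Rubinstein (CRR) binomial
# pricer for European calls. Names and the batching-over-strikes design are
# assumptions for illustration, not the implementation discussed in the talk.
import numpy as np

def crr_call_prices(spot, strikes, rate, vol, expiry, steps=512):
    """Price European calls for many strikes at once on a CRR lattice."""
    dt = expiry / steps
    u = np.exp(vol * np.sqrt(dt))            # up factor
    d = 1.0 / u                               # down factor
    p = (np.exp(rate * dt) - d) / (u - d)     # risk-neutral up probability
    disc = np.exp(-rate * dt)

    # Terminal asset prices across the lattice: shape (steps + 1,)
    j = np.arange(steps + 1)
    terminal = spot * u**j * d**(steps - j)

    # Payoffs for every (strike, terminal node) pair: shape (n_strikes, steps + 1)
    values = np.maximum(terminal[None, :] - np.asarray(strikes)[:, None], 0.0)

    # Backward induction through the lattice, vectorized across strikes
    for _ in range(steps):
        values = disc * (p * values[:, 1:] + (1.0 - p) * values[:, :-1])
    return values[:, 0]

if __name__ == "__main__":
    print(crr_call_prices(100.0, [90.0, 100.0, 110.0], 0.03, 0.2, 1.0))
```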

Innovation Roundup  (Big Compute, Big Data, Machine Learning)
  "The $326 Billion Deep Learning Opportunity"
    Pavel Anni, Principal Customer Engineer, SambaNova Systems
  "Accelerating the machine learning development cycle and getting to market faster."
    Parviz Peiravi, Global CTO for Financial Services Industry Solutions, Principal Engineer, Intel Corporation
  "DDN, the AI Data Company for FSI"
    Kurt Kuckien, Vice President of Marketing, DDN

 

Building an ML factory  (Big Compute, Big Data, Machine Learning)
 

The potential of machine learning in finance is immense. While the field is still in its infancy, the success of early applications is creating excitement. But excitement can quickly be deflated by cost. For one thing, public and private clouds make it easy to consume massive amounts of technology, so charges accumulate rapidly. For another, each ML project can bring new data requirements, different research toolsets, and long lead times, leading many executives to question whether it’s worth the expense. If ML is to realize its promise, firms need a model “factory” with two characteristics: 1) a scalable, repeatable research process, and 2) high-performance systems that generate results faster and waste no resources. Building such a factory requires answering many important questions, such as: How do you develop the toolsets and processes that let researchers leverage institutional knowledge and benefit from prior work? How do you ensure a constant supply of data so compute is never idle? How do you accelerate training and backtesting to land faster upon the right model? Join our panel to discuss these questions and more as they explore how to create a machine learning factory.
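As a toy illustration of the "keep compute fed" question above, the sketch below prefetches batches on a background thread so the training loop never waits on I/O. All names are hypothetical and only the Python standard library is used; a real ML factory would more likely rely on framework-native loaders (for example, PyTorch DataLoader workers or tf.data prefetching) and a shared feature store.

```python
# A minimal, generic sketch of the "never let compute go idle" idea:
# a background thread prefetches batches into a bounded queue while the
# trainer consumes them. Function and variable names are hypothetical.
import queue
import threading

SENTINEL = object()  # marks end of the dataset

def prefetch(batch_iterable, depth=8):
    """Yield batches from batch_iterable, loading ahead on a background thread."""
    buf = queue.Queue(maxsize=depth)

    def producer():
        for batch in batch_iterable:
            buf.put(batch)           # blocks if the trainer falls behind
        buf.put(SENTINEL)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        batch = buf.get()
        if batch is SENTINEL:
            break
        yield batch

if __name__ == "__main__":
    def fake_loader():
        for i in range(5):
            yield f"batch-{i}"       # stands in for expensive I/O and decoding

    for b in prefetch(fake_loader(), depth=2):
        print("training on", b)      # stands in for a GPU training step
```

The bounded queue is the key design choice: its depth controls how much data is staged ahead, trading memory for tolerance of bursty storage latency.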

Treat the cloud as a cloud: Success stories from beyond the virtual data center  (Big Compute, Big Data, Machine Learning)
 

Infrastructure teams are under pressure. Data rates are escalating alongside an influx of new processing requirements; volatility and shifting markets are putting a premium on agility; and supply chain issues coupled with distributed workforces are lengthening the lead times for procurement and deployment. Can infrastructure teams use the public cloud to open relief valves? Boni has seen it happen. He has experience with recent, real-life cloud transitions that allowed firms to process hundreds of thousands of transactions a second, perform HPC and ML operations on petabytes of data, and rapidly create innovative new products and services. The key is that these firms don’t simply use the cloud as a "virtual data center" with VMs and storage devices to be managed through “ops as usual”. Instead, they treat the cloud as a cloud—that is, an abstraction responsible for automatically providing and managing resources. Boni will walk us through these use cases to illustrate how firms embracing new operational workflows, cloud-native tools, and infrastructure-as-code can relieve the pressure and quickly find success with cloud deployments.

Maneuvering in the multi-cloud  (Big Compute, Big Data, Machine Learning)
 

Many financial firms are moving toward a multi-cloud existence. In some cases, firms are mandating a multi-cloud approach to avoid single points of failure. In other cases, trading venues, data distributors, and other service providers are forcing multi-cloud by partnering with different cloud providers. The result is that financial technologists need to navigate an increasingly diverse and fragmented landscape. RJ is here to help with a map. In this talk, he'll plot a course that helps with three areas of concern: developing a multi-cloud strategy, creating a consistent operational approach, and optimizing across multiple deploys. Along the way, he'll address when you should and should not consider multi-cloud and give pointers on preparing for workload mobility. Join him for the journey, and don't be afraid to ask for directions.

STAC time series update  (Big Data, Fast Data)
 

Peter will present the latest benchmark results from software/hardware stacks for historical time-series analytics.

Innovation Roundup  (Big Data)
  "Optimizing Google Cloud for STAC-M3"
     Paul Mibus, Technical Solutions Consultant, Google Cloud
  "The future value of time!"
    Joe Steiner, CTO of Global Accounts and Financial Services, Dell Technologies
  "Lower latency + Zero adoption barrier = Win-Win"
    Pete Kapusta, Senior Director of Sales Engineering, ScaleFlux

 

Will data volume destroy trading economics?  (Big Data, Big Compute)
 

Financial firms continue to struggle with growing data sets. Firms dealing with a petabyte today are planning for scores, if not hundreds, of petabytes soon. At the rate that data is scaling, an exabyte future is not unimaginable. Causes abound. For example: volatility is increasing inbound data volumes; the output of strategy research is ballooning; distributed ledgers and data-as-code are adding bulk; and system oversight is driving more logging. How do we keep the cost of storage from outpacing the benefit? Our panel of experts will discuss the levers you can pull to manage these loads, such as performant compression, intelligent de-dupe, smaller form factors, and new software solutions. Join us to explore how to gain leverage over data loads without breaking the bank. To kick off, some of the panelists provided a short presentation:

  "Increasing velocity in Financial services with the power of Dell Technologies."
    Joe Steiner, CTO of Global Accounts and Financial Services, Dell Technologies
  "Taming the Petabyte Monster"
    Renen Hallak, CEO and Co-Founder, VAST Data
  "Solving the Write Problem: Pavilion is the ultimate quant playground"
    Costa Hasapopoulos, Chief Field Technology Officer & WW VP Business Development, Pavilion Data Systems, Inc.

 

Streaming to humans: Can open source hack it?  (Fast Data)
 

No matter how much we automate, getting data to actual humans remains critically important in finance. On desktop, web, and mobile platforms, users need live, updating data in order to derive insights and take action. Over the last several years, the open-source community has developed a number of solutions for streaming data, from ZeroMQ to Kafka to Arrow Flight. These range widely in performance, functionality, and interactivity. Pete Goddard, CEO of Deephaven, will analyze the strengths of these platforms and the gaps Deephaven discovered when building solutions for finance users. Along the way, Pete will describe how Deephaven filled those gaps: Barrage, an open-source extension to Apache Arrow that adds support for live updates and ticking data, as well as bidirectional streaming over gRPC with browsers. Join us to compare notes with Pete and see if open source can satisfy your users.
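For context on the gap Pete will describe, the sketch below shows the open-source baseline: an Apache Arrow Flight service that streams a snapshot of a table over gRPC. The port, ticket name, and sample data are assumptions for illustration; Barrage's additions (incremental updates to ticking tables and browser-friendly transport) are not shown here.

```python
# Minimal Apache Arrow Flight sketch: serving a *snapshot* of a table over gRPC.
# Port, ticket name, and sample data are illustrative assumptions.
import threading
import time

import pyarrow as pa
import pyarrow.flight as flight

class SnapshotServer(flight.FlightServerBase):
    def __init__(self, location="grpc://0.0.0.0:8815"):
        super().__init__(location)
        # A static table: plain Flight hands clients a one-shot snapshot,
        # not a live, updating view.
        self._tables = {
            b"trades": pa.table({
                "symbol": ["ABC", "XYZ", "ABC"],
                "price": [101.5, 32.1, 101.7],
            })
        }

    def do_get(self, context, ticket):
        # Stream the requested table back to the client as record batches.
        return flight.RecordBatchStream(self._tables[ticket.ticket])

if __name__ == "__main__":
    server = SnapshotServer()
    threading.Thread(target=server.serve, daemon=True).start()
    time.sleep(0.5)  # give the server a moment to start listening

    client = flight.connect("grpc://localhost:8815")
    table = client.do_get(flight.Ticket(b"trades")).read_all()
    print(table)     # clients must re-request to see new data
    server.shutdown()
```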

STAC fast data update  (Fast Data)
 

Peter will present the latest low-latency and fast-data updates.

Innovation Roundup  (Fast Data)
  "Three blind mice?....See how they run"
    Mike Galime, Director, Finance and Capital Markets, Keysight Technologies, Inc.
  "LDA NeoTap: a leading edge solution for 10/25/40G tap aggregation and timestamping"
    Vahan Sardaryan, Co-Founder and CEO, LDA Technologies
  "Everywhere Resilient High Accuracy Time Synchronization"
    Francisco Girela, Americas WR Tech Responsible, Orolia
  "Thinking Outside The (Black) Box"
    Jon Axon, Managing Director, Packets2Disk

 

Securing low-latency hardware designs  (Fast Data)
 

How secure is the logic in your FPGAs and ASICs from external or internal threats? How do you know? If you have trouble answering these questions, you’re not alone. While the market has gotten a grip on operational risk (especially following the Knight Capital disaster), it’s not clear that it has the same maturity when it comes to the security of “black box” hardware. Simply imagining someone else taking control of one of your low-latency trading systems makes the stakes clear enough. Fortunately, Adam believes that robust hardware security is possible with work, and it's relatively easy to get started and build toward more secure designs. In this talk, he’ll show you comprehensive verification techniques to reduce latent bugs that represent potential attack surfaces. He’ll also cover formal methods to ensure your hardware designs are free from known issues in Mitre Corporation’s public Common Weakness Enumeration (CWE) and protect the confidentiality and integrity of your design assets. Join him to discover techniques you can use immediately and others you can apply over time.

Innovation Roundup  (Fast Data)
  "We Have a Data Latency and Ingestion Problem"
    Alex Stein, Global Head Business Development, Liquid-Markets-Holdings Inc.
  "10GbE and 25GbE Low-latency SerDes PMAs for ASIC Implementation"
    Jeff Lumish, Principal, Silicon Creations

 

Can low latency be composed?  (Fast Data)
 

Low-latency infrastructures usually avoid abstraction. In the name of speed, firms ruthlessly remove any layer that adds flexibility. Is that about to change? Network equipment is incorporating more functionality into the hardware fabric, while rapid advances in IP cores are turning FPGAs into customizable trading pipelines. Does this provide an opportunity for low-latency trading systems to apply the infrastructure-as-code paradigm that has gained popularity with less latency-sensitive cloud deploys? Is it feasible for code to orchestrate low-latency networks, coordinate hardware-based trading logic, or manage monitoring systems without abandoning the speed and efficiency required to stay competitive? If so, firms could potentially reduce both time to market and costs. Come join our panel of infrastructure experts to discuss if and when composing low-latency systems makes sense. To kick off, some of the panelists provided a short presentation:

  "The Latest Innovations in Low-latency Networks"
    Dr. Dave Snowdon, Director, Engineering, Arista Networks
  "Inline processing of network traffic with FPGAs"
    Davor Frank, Sr. Manager Field Apps Engineering, AMD
  "Fairness through a picosecond lens"
    John Sabasteanski, Distinguished Engineer, Cisco Systems