STAC Summit, 19 Oct 2022, NYC

STAC Summits

STAC Summits bring together CTOs and other industry leaders responsible for solution architecture, infrastructure engineering, application development, machine learning/deep learning engineering, data engineering, and operational intelligence to discuss important technical challenges in trading and investment.

New York Marriott Marquis, 1535 Broadway, New York
Astor Ballroom


Click on the session titles to view the slides (may require member permissions).

STAC big compute update (Fast Compute, Big Compute)

Peter will present the latest benchmark results for big compute stacks.

Innovation Roundup (Big Data, Big Compute, Fast Compute)
  "STAC-A2 Benchmark Results in AWS"
    Alket Memushaj, Principal Solutions Architect – Capital Markets, Amazon Web Services
  "Nutanix accelerating cloud adoption in Capital Markets"
    Lev Goronshtein, Advisory Systems Engineer, Nutanix, Inc.


Multi-cloud patterns: Lessons from experience (Big Data, Big Compute)

Each cloud provider offers a wealth of compute, storage, and workflow options. HPC architects are no longer constrained by in-house equipment or limited by lengthy procurement processes. But the new wealth of options has a downside: figuring out what offerings are right for a given workload. The challenges grow tougher with every new cloud provider added to the mix, including seemingly unlimited combinations of offerings, integrations, and ever-evolving operational tools. Architects seem forced to either conduct lengthy evaluations or settle for sub-optimal solutions. Andrey will share some thoughts on simplifying—while still benefiting from—a multi-cloud approach. He will discuss the challenges his team faced and how they successfully overcame them. He will also present a framework for success that he developed while architecting and deploying Bloomberg’s multi-cloud compute infrastructure. Don’t miss this chance to learn from his team’s experience and find more value from multi-cloud compute.

Beyond lift-and-shift: Optimizing cloud storage for I/O-bound workloads (Big Data, Big Compute)

According to many user firms within the STAC community, storage-centric workloads continue to be prime candidates for a cloud or hybrid model and are mostly undergoing a "lift and shift". But once the initial move is complete, how do technologists optimize for performance? What architectures make sense when configuring storage for time series databases and high-performance analytics? Joel will walk us through some real-life examples to answer these questions and more. Along the way, he'll provide information on profiling I/O to help you optimize your cloud-based and hybrid storage systems.
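As a minimal sketch of the kind of I/O profiling the talk mentions (the function name and parameters are illustrative, not from the session; real profiling would use tools such as fio or iostat against much larger working sets), the difference between sequential and random block reads can be measured like this:

```python
import os
import random
import time

def profile_reads(path, block_size=4096, blocks=256):
    """Time sequential vs. random block reads on a file.

    Returns (sequential_seconds, random_seconds). A toy profiler:
    it ignores page-cache effects, which dominate on small files.
    """
    n_blocks = os.path.getsize(path) // block_size
    count = min(blocks, n_blocks)
    with open(path, "rb") as f:
        # Sequential pass: read consecutive blocks from the start.
        t0 = time.perf_counter()
        for i in range(count):
            f.seek(i * block_size)
            f.read(block_size)
        seq = time.perf_counter() - t0

        # Random pass: read the same number of blocks at random offsets.
        offsets = [random.randrange(n_blocks) * block_size for _ in range(count)]
        t0 = time.perf_counter()
        for off in offsets:
            f.seek(off)
            f.read(block_size)
        rnd = time.perf_counter() - t0
    return seq, rnd
```

Comparing the two timings on the actual storage tier (local NVMe, network file system, or object-backed cache) is one quick way to see which access pattern a given cloud storage configuration favors.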

STAC time series update (Big Data, Fast Data)

Peter will present the latest benchmark results from software/hardware stacks for historical time-series analytics.

Innovation Roundup (Big Data, Fast Data)
  "Beyond Speeds and Feeds"
    Joseph Steiner, Global CTO of Financial Services, UDS Division, Dell Technologies
  "Interoperability vs performance: can you have the best of both?"
    Jack Kiernan, Senior Presales Engineer, KX
  "Speed or Scalability? I’ll take both!"
    Jonathan Malamed, Director, Unstructured Data, Pure Storage
  "Starfish: Get a Grip on File Storage"
    Jacob Farmer, Founder and Chief Evangelist, Starfish Storage


STAC machine learning update (Machine Learning)

Bishop will discuss the latest research and Council activities related to machine learning, including updates on inference and training workloads.

Innovation Roundup (Machine Learning, Big Data)
  "AI in Financial Services"
   Amr El-Ashmawi, VP Vertical Markets, Groq
  "IBM Global Data Platform for AI/ML"
    Matthew Klos, Senior Solutions Architect, IBM


How to improve performance in open source AI (Machine Learning)

The primary focus of open source AI software has been functionality and usability. To keep up with users' initial needs, projects have prioritized breadth of coverage, including support for hardware, models, and data-pipeline integrations. But the more AI is used in production environments, the more users' needs are shifting to speed and scale. Rachel thinks that performance is critical for the next generation of open source AI projects. After reviewing the current AI landscape and the challenges data scientists and AI engineers often face when they need to speed up and scale up their production pipelines, she will present key improvements the open source community has provided in response. Using currently available projects, she will show you how to exploit these performance innovations while maintaining functionality, through drop-in replacements and just a few lines of code.
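As a generic, stdlib-only illustration of the "drop-in replacement, a few lines of code" theme (not drawn from the specific projects the talk will cover; `fetch` and the sleep are stand-ins for an I/O-bound pipeline stage), the same `map()` call can be handed to a thread pool without restructuring the pipeline:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(x):
    # Stand-in for an I/O-bound pipeline stage, e.g. loading a data shard.
    time.sleep(0.01)
    return x * x

items = list(range(20))

# Serial baseline.
t0 = time.perf_counter()
serial = list(map(fetch, items))
serial_t = time.perf_counter() - t0

# Drop-in replacement: the same map() call, now running concurrently.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(fetch, items))
parallel_t = time.perf_counter() - t0

assert serial == parallel  # identical results, less wall-clock time
```

The appeal of this pattern is that correctness is unchanged while throughput improves; the open-source projects the talk surveys apply the same idea at the level of whole dataframe, training, and serving APIs.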

Optimization strategies for the DL storage stack (Big Data, Machine Learning)

Many capital markets technologists have experience optimizing storage architectures for workloads like time-series databases and backtesting. But the training phase of Deep Learning is a different workload, with a complex mix of high-concurrency reads, checkpointing, and random large-block mmaps. The storage stack (client, network, file system, and storage hardware) must cater to these access patterns during initial model training as well as retraining. Otherwise, I/O bottlenecks can leave costly compute resources underutilized and delay model deployment. (Few storage architects want to be responsible for missed business opportunities.) In this talk, Sven will help you design more efficient AI solutions by exploring what happens during DL training from the storage system's point of view. Using lessons from real-world examples, he'll discuss the implications for the storage stack and walk through optimizations for the file system and the rest of the datapath.
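The access pattern the abstract describes can be sketched in a few lines; this is a toy reproduction under assumed sizes (the constants, `reader`, and `run` are illustrative, and real training shards are orders of magnitude larger), useful mainly for exercising a storage target with concurrent random large-block mmap reads:

```python
import mmap
import random
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sizes for illustration only.
BLOCK = 1 << 20        # 1 MiB "large-block" random reads
N_BLOCKS = 16          # the test file holds 16 such blocks
READS_PER_WORKER = 8

def reader(path, seed):
    """One data-loader thread issuing random large-block reads via mmap."""
    rng = random.Random(seed)
    total = 0
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
            for _ in range(READS_PER_WORKER):
                off = rng.randrange(N_BLOCKS) * BLOCK
                total += len(m[off:off + BLOCK])
    return total

def run(path, workers=4):
    """Concurrent readers, mimicking the high-concurrency read phase
    of DL training that the storage stack must absorb."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda s: reader(path, s), range(workers)))
```

Pointing a driver like this at different mount points (local flash vs. a parallel file system) makes the I/O bottlenecks the talk discusses directly observable.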

What CXL means for FSI (Fast Data, Big Data, Fast Compute, Big Compute)

At STAC two years ago, Dr. Matthew Grosvenor gave a primer on competing interconnect standards and explained why he viewed Compute Express Link (CXL) as the strongest contender from a technical standpoint. Having the support of all the major processor and systems vendors hasn’t hurt the standard, either. Today CXL-enabled solutions are starting to ship. What do you need to know? Bhushan will dive deep into CXL, explaining how it works and what advantages it provides. He'll also show how CXL interfaces an FPGA with a CPU, impacts the flow of data, and accelerates storage networking and memory-driven applications like realtime data processing and low-latency trading. Bring your questions and join Bhushan to understand this emerging technology.

STAC fast data update (Fast Data)

Peter will discuss the latest research and Council activities related to low-latency/high-throughput realtime workloads.

Innovation Roundup (Fast Data)
  "AccuCore HCF™ Low-Latency Amplification and Optical Transmission"
    Daryl Inniss, Director, OFS Fitel
  "Time Synchronization: Top 5 real life uses"
    Francisco Girela, BizDev and Sales Eng Manager, Orolia
  "Revolutionizing end-to-end data analytics for alpha generation"
    Harley Semple, Director Data Solutions, Options Technology
  "We are ready for the new SIP"
    Pierre Gardrat, CTO, NovaSparks


Low-latency market integration: Time to rethink buy-vs-build? (Fast Data, Fast Compute)

Every trading firm faces a choice in how to support latency-sensitive trading strategies with data, analytics, and execution in a new market: should we build and operate our own software/firmware, delegate it to a vendor, or do some of both? Once made, the decision often lasts many years. Do recent trends in the vendor landscape warrant a rethink on what to buy or whom to buy from? M&A has brought together companies in low-latency market data, order entry, historical tick data and analytics, hardware development, network connectivity, and operational support. The emerging product combinations claim to offer deeper and broader value than before. How close do the new offerings get to the ideal of plug-and-play market access? What sort of latency sensitivities can they serve? Are vendors’ engagement models changing along with the products? How are solutions satisfying rising demands such as cloud delivery and secure access, and how will they evolve as customer needs change? Join our panelists to add your questions to the mix. To kick off, some of the panelists provided a short presentation:

  "The 5 biggest misconceptions regarding low latency trading technology"
    Laurent de Barry, Director, Hardware Trading Solutions, Exegy
  "Unleashing the Value of Market Data"
    Patrick Flannery, Group Leader, Low Latency, Refinitiv, an LSEG Business

STAC FPGA SIG update (Fast Data)

Peter will present updates from the FPGA special interest group.

Innovation Roundup (Fast Data)
  "Accelerating Trading Strategies with AMD Next Gen Low Latency Trading Platforms"
    Hamid Salehi, Director of Product Marketing, FinTech & Blockchain, AMD
  "Latency & Throughput - Your NIC Can’t Keep up With Modern Markets – Ours Can"
    Seth Friedman, CEO, Liquid-Markets-Solutions
  "Ultimate latency reduction with LDA 10/25/40G product suite."
    Vahan Sardaryan, Co-Founder and CEO, LDA Technologies
  "Race to Ultra Low Latency ASICs with Embedded FPGAs"
    Bill Jenkins, Director Product Marketing for AI/Military/Financial, Achronix


Accelerating "time to first trade" with FPGAs (Fast Data, Fast Compute)

As trading strategies requiring ultra-low latency have become more complex, so have the designs firms are implementing in FPGAs. In fact, design size and complexity are outpacing headcount – a recipe for stalled projects, underperforming solutions, and buggy implementations. At a time when it is critical to quickly adapt to fast moving markets, hardware engineers must find a way to step on the gas. The software world relies heavily on continuous integration (CI) to get to market faster, but the approach is still underutilized in hardware. In this talk, Mark will lay out a faster path to “go live” via an easy-to-adopt methodology that maximizes CI throughput, exploits automated, exhaustive RTL code inspection, and assures code quality throughout the development process. Join him to learn ways to accelerate your FPGA projects’ time to first trade.

How to shine a light on full FPGA and ASIC performance (Fast Data, Fast Compute)

A straightforward way to design low-latency hardware is by distributing the performance budget across the system's blocks. The design achieves its latency goals if the performance measured in each block meets its budget in all use cases. But this direct approach may not address all problems. What about synchronization between blocks, a saturated communication infrastructure, or the order of the algorithm? Performance issues in these areas can remain in the dark with a block-level approach. Adam will show how to illuminate each layer of the hardware design and find unanticipated bottlenecks. He'll walk through performance measurement methods you can use immediately at the block, subsystem, and system levels to better understand and improve your performance.
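The budget-per-block approach can be made concrete with a small sketch (the block names and nanosecond figures are hypothetical, not from the talk):

```python
# Hypothetical per-block latency budgets (ns) for a tick-to-trade path.
budget = {"mac_rx": 60, "parser": 40, "book_build": 80, "strategy": 50, "mac_tx": 60}
measured = {"mac_rx": 55, "parser": 38, "book_build": 75, "strategy": 48, "mac_tx": 58}

def check_budgets(budget, measured):
    """Return (blocks over budget, total budget, total measured).

    Note the limitation the talk addresses: every block can be within
    budget while cross-block effects (synchronization, back-pressure on
    shared interconnect) still push end-to-end latency past the target.
    """
    over = [b for b in budget if measured.get(b, float("inf")) > budget[b]]
    return over, sum(budget.values()), sum(measured.values())
```

Here every block passes and the totals look healthy, which is exactly why subsystem- and system-level measurement is needed to catch the bottlenecks a block-level view leaves in the dark.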

Advancing open source in FPGA (Fast Data, Fast Compute)

FSI software engineers have long benefited from abundant open-source projects, but firmware engineers still have few good options. This was unsurprising when the developer community was small, but now that FPGA development is widespread in finance and other industries, the time is ripe for change. How can financial firms propel the evolution of open source in FPGA? What projects can benefit from collaboration without participants losing proprietary advantages? In what ways can firms pool resources, work with vendors, and leverage the global community to reduce business costs? What changes are needed to both open- and closed-source toolchains to accelerate collaborative projects? Join our panel of experts as they discuss these questions and yours. To kick off, some of the panelists provided a short presentation:

  "Open-Source Pros & Cons in the FPGA Development Flow"
    Stephen Kopec, North America, Data Center FAE Manager, AMD
  "Low-latency layer 3 and FPGAs"
    Darrin Machay, Principal Engineer, Arista Networks

