STAC Summit, 2 May 2024, London

STAC Summits

STAC Summits bring together CTOs and other industry leaders responsible for solution architecture, infrastructure engineering, application development, machine learning/deep learning engineering, data engineering, and operational intelligence to discuss important technical challenges in trading and investment.

WHERE
etc.venues
200 Aldersgate, St Paul's
1st Floor, North Wing
London EC1A 4HD

Agenda

Click on the session titles to view the slides.


Navigating fitness-vs-cost tradeoffs of GenAI solutions [AI]
 

The world is now awash in proprietary and open-source large language models (LLMs). These models present a wide range of tradeoffs between fitness for purpose (quality and responsiveness of output relative to the requirements of the use case) and total cost of operation. For example, larger models can be more accurate but usually cost more and are sometimes slower. Many solution-design patterns can improve responses and/or reduce cost, including automated prompt engineering, retrieval augmented generation (RAG), agentic workflows, fine tuning, and model quantization. Enterprises also have a choice between using SaaS LLMs, self-hosting LLMs in the cloud, or self-hosting them on-prem, each of which has its own tradeoff curves. Choosing the right approach for a given use case--let alone for a central enterprise AI utility--is a big challenge. In this session, Gad will provide an overview of the key solution design patterns and considerations for assessing the multi-dimensional cost implications of different deployment options.
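
To make the multi-dimensional cost point concrete, here is a back-of-envelope sketch comparing metered SaaS pricing with self-hosted GPU capacity for the same daily token volume. Every constant in it (price per 1,000 tokens, GPU-hour cost, sustained throughput) is an assumed placeholder, not a quoted rate; only the shape of the crossover is the point.

```python
# Back-of-envelope comparison of metered SaaS pricing vs. self-hosted GPU cost.
# All constants are illustrative assumptions, not quotes from any vendor.
import math

SAAS_PRICE_PER_1K_TOKENS = 0.01      # assumed blended input/output price, USD
GPU_HOUR_COST = 4.00                 # assumed fully loaded cost of one GPU-hour, USD
TOKENS_PER_GPU_HOUR = 2_000_000      # assumed sustained throughput of one self-hosted GPU

def saas_cost_per_day(tokens_per_day: float) -> float:
    """Daily cost if every token goes through a metered SaaS endpoint."""
    return tokens_per_day / 1_000 * SAAS_PRICE_PER_1K_TOKENS

def self_hosted_cost_per_day(tokens_per_day: float) -> float:
    """Daily cost if the same volume is served from dedicated GPUs.
    Capacity comes in whole GPUs, so low volumes pay for idle headroom."""
    gpus_needed = math.ceil(tokens_per_day / (TOKENS_PER_GPU_HOUR * 24))
    return gpus_needed * GPU_HOUR_COST * 24

for tokens in (1e6, 50e6, 500e6):
    print(f"{tokens / 1e6:>5.0f}M tokens/day: "
          f"SaaS ${saas_cost_per_day(tokens):>8,.0f}  vs  "
          f"self-hosted ${self_hosted_cost_per_day(tokens):>8,.0f}")
```

At low volumes the metered option wins because dedicated capacity sits idle; at sustained high volumes the fixed cost amortizes, which is one reason the deployment decision is so use-case dependent.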

Temporal RAG: Enabling GenAI breakthroughs in trade ideation, execution, and risk management [AI]
 

Getting large language models (LLMs) to provide accurate and relevant output is a well-known challenge. It’s even more daunting in the fast-moving landscape of capital markets. Trading and investment firms care deeply about the sequence of events and how information changes over time. The key to unleashing the power of GenAI for markets is to integrate source content into a time-series framework and to enable LLMs to operate on the wealth of traditional financial time series that firms already amass. Through practical, high-impact use cases in alpha gen, trade execution, and risk, Prasad will argue for a new kind of retrieval augmented generation (RAG) that fully leverages the temporal properties of information.
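
As a loose illustration of what "temporal" can mean in retrieval (a sketch of the general idea, not Prasad's system), the snippet below restricts candidates to documents that existed as of the query time and decays their scores by age. The similarity function, half-life, and example documents are invented placeholders.

```python
# Minimal sketch of time-aware retrieval: restrict candidates to documents that
# existed as of the query time, then decay similarity by document age. This
# illustrates the general idea only; names, scoring, and the half-life are
# invented placeholders, not the speaker's implementation.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Doc:
    text: str
    timestamp: datetime

def similarity(query: str, doc: str) -> float:
    """Toy lexical overlap; a production system would use embeddings."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def temporal_retrieve(query: str, as_of: datetime, corpus: list[Doc],
                      half_life_days: float = 5.0, k: int = 3) -> list[Doc]:
    """Rank docs by similarity decayed with age relative to `as_of`; never
    return anything published after `as_of` (no look-ahead)."""
    def score(doc: Doc) -> float:
        age_days = (as_of - doc.timestamp).total_seconds() / 86_400
        return similarity(query, doc.text) * 0.5 ** (age_days / half_life_days)
    eligible = [d for d in corpus if d.timestamp <= as_of]
    return sorted(eligible, key=score, reverse=True)[:k]

# Example with fictional content: the as-of date controls both eligibility and ranking.
corpus = [
    Doc("Widget Corp issues profit warning ahead of Q1 results", datetime(2024, 4, 2)),
    Doc("Widget Corp beats earnings estimates", datetime(2023, 11, 7)),
]
hits = temporal_retrieve("Widget Corp earnings outlook", datetime(2024, 5, 1), corpus)
```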

Doing GenAI at scale this year [AI]
 

Most financial firms experimented with LLMs in 2023. Some have solutions in production, and those that don't are mostly planning to get there in 2024. Once an initial solution is in production, AI architects will face the consequence of their success: demand for more users and use cases. But scaling generative AI is more complicated than scaling most enterprise workloads. With more use cases come broader model-governance challenges. LLM instances can be in short supply, scattered geographically, and diverse in capability. And fixed or variable costs can be "eye-watering" (as one hedge fund described its first bills for AI inference). Fortunately, every few days the research and vendor communities supply new models, software, hardware, managed services, and design patterns to tackle these problems. Which of these have the most promise and will be practical in 2024? And what kind of architectural frameworks are best in a world of such rapid change? Our panel of experts will dig into these questions and yours.
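
One concrete flavor of the scaling problem: when LLM endpoints differ in capability, cost, location, and remaining capacity, something has to decide where each request goes. The toy router below is purely illustrative (all fields, names, and thresholds are invented), but it sketches the kind of policy an enterprise AI utility might encode.

```python
# Toy sketch of capability-aware routing across a pool of LLM endpoints.
# Every field and name here is invented for illustration; real governance and
# routing layers carry far more state (quotas, data residency, model versions).
from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str
    region: str
    max_context: int            # largest prompt the model can accept, in tokens
    tokens_per_sec: float       # remaining throughput headroom
    cost_per_1k_tokens: float

def route(prompt_tokens: int, min_quality_tier: int,
          pool: list[tuple[Endpoint, int]]) -> Endpoint:
    """Pick the cheapest endpoint that meets the quality tier, fits the prompt,
    and still has throughput headroom. `pool` pairs each endpoint with its tier."""
    eligible = [ep for ep, tier in pool
                if tier >= min_quality_tier
                and ep.max_context >= prompt_tokens
                and ep.tokens_per_sec > 0]
    if not eligible:
        raise RuntimeError("no endpoint satisfies the request; queue or degrade")
    return min(eligible, key=lambda ep: ep.cost_per_1k_tokens)
```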

STAC update: AI [AI]
 

Jack will provide a preliminary look into STAC-AI™, an LLM benchmark suite guided by the priorities of financial firms that measures a full solution stack, from the model to the metal.

Innovation Roundup [Big Data, AI]
  "PowerScale: World’s First Ethernet Storage Certified on NVIDIA SuperPOD"
    Simon Haywood, Field CTO, EMEA, Dell Technologies
  "Lenovo AI and Sustainability Innovations."
    Dave Weber, Director and CTO, Global Financial Services Industry, Lenovo
  "Modern Data Architectures: Preparing Your Infrastructure Strategy for GenAI"
    Mark Lucas, Director Presales Systems Engineer EMEA, Hammerspace
STAC update: Risk computing & tick analytics [Big Data, Big Compute]
 

Jack will discuss the latest Council activities and test results relating to 1) derivatives risk computation and 2) deep time-series analytics.

Innovation Roundup [Big Data, Big Compute]
  "How we built an open source time-series database for financial markets"
    Nicolas Hourcard, Co-founder & CEO, QuestDB
  "Open Source Software for HA and SDS for Kubernetes and Virtualization from LINBIT"
    Philipp Reisner, CEO, LINBIT
Surfing the gravity wells: Risk & trading analytics in an AI- and hyperscaler-dominated tech universe [Big Data, Big Compute]
 

Banks and hedge funds require more compute, storage, and networking to meet increasing demands for trading and risk analytics. Data volumes continue to balloon, regulations require more simulations, and new market opportunities require new analytics. However, generative AI and cloud have famously become "gravity wells" for the IT industry, driving its product roadmaps. On the one hand, this may increase the options available for financial HPC and data-intensive workloads, driving down long-term costs. On the other hand, AI and hyperscaler architectures can differ in important ways from those of today's trading and risk analytics, which presents challenges. To what extent can the finance industry benefit from the new products coming forth? How much longevity is left in existing approaches? Are there opportunities (or even imperatives) for trading and investment firms to rethink how they design their applications and infrastructure? To kick off, there will be some brief presentations:

  "From Fragmentation to Aggregation: Fearlessly Centralizing Data"
    Andrew Sillifant, Principal Solutions Manager, Pure Storage
  "How a heritage in world leading supercomputing translates into the Age of AI."
    Chris Lacey, HPC & AI Technical Architect, Hewlett Packard Enterprise
  "DDN selected by leading quantitative trading firm for HPC"
    Matt Raso-Barnett, Senior Systems Engineer, DDN
STAC update: Fast Data & Compute [Fast Data, Fast Compute, Machine Learning]
 

Jack will discuss the latest Council activities and test results relating to 1) low-latency LSTM inference on market data and 2) network stacks in the cloud and on the ground.

Innovation Roundup [Fast Data, Fast Compute, Machine Learning]
  "How fast can my AI model run?"
    Liz Corrigan, Chief Product Officer, Myrtle.ai
  "ÜberNIC... It Just Works..."
    Seth Friedman, CEO, Liquid-Markets-Holdings
  "Adaptive Technology Stack: Aeron, Artio, Agrona and SBE."
    Ed Silantyev, Java Engineer, Adaptive
  "Unleashing Speed: The Pinnacle of Performance in High-Frequency Trading with Hybrid FPGA and Software Solutions"
    Tom Coombs, Vice President of Sales, Orthogone
  "Market Data in the Cloud"
    Peter Hodgson, Senior Engineer, Options Technology
Big distances, tiny tolerances: Making time sync precise over a wide area [Fast Data]
 

Building a time-synchronization network spanning multiple data centers within a metropolitan area is a formidable challenge, particularly when the requirements include fault tolerance, nanosecond accuracy, and traceability to UTC(NIST). Quincy Data undertook this challenge, using White Rabbit in Chicago and New Jersey to synchronize across the major trading venues and using GNSS to connect these metros into a single clock domain. Come hear Mike explain some of the problems Quincy encountered in design and implementation and how they overcame them.
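
For background on why wide-area sync is delicate, the sketch below shows the standard two-way time-transfer arithmetic that PTP relies on (White Rabbit builds on the same exchange with hardware timestamps and phase measurement). The link length and clock offset in the example are made-up values; the key caveat is the symmetric-path assumption, which long fiber routes can violate.

```python
# Standard two-way time-transfer arithmetic, as used by PTP.
# t1: master sends SYNC; t2: slave receives it; t3: slave sends DELAY_REQ;
# t4: master receives it. Assumes a symmetric path, which asymmetric fiber
# routes over a wide area can violate; that is one reason metro-scale sync is hard.

def offset_and_delay(t1: float, t2: float, t3: float, t4: float) -> tuple[float, float]:
    """Return (slave clock offset from master, one-way path delay) in seconds."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Illustrative numbers only: roughly 500 microseconds of fiber each way and a
# slave clock running 40 ns ahead of the master.
t1 = 0.0
t2 = t1 + 500e-6 + 40e-9     # propagation plus the slave's offset
t3 = t2 + 1e-6               # slave responds 1 microsecond later
t4 = t3 + 500e-6 - 40e-9     # return propagation, in the master's frame
off, d = offset_and_delay(t1, t2, t3, t4)
print(f"offset = {off * 1e9:.1f} ns, one-way delay = {d * 1e6:.1f} us")
```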

Innovation Roundup [Fast Data, Fast Compute]
  "Novasparks in Europe: speeding up fragmented markets"
    Sapna Swaly, Business Development, EMEA, NovaSparks
  "The Fast Lane to Connectivity: Unparalleled performance and minimal latency with LDA's cutting-edge product line."
    Vahan Sardaryan, Co-Founder and CEO, LDA Technologies
  "Maximising Efficiency: Trade Cycle Optimisation"
    Matt Dangerfield, Adviser, Telesoft Technologies
  "High performance microburst detection at scale"
    Michael Hu, Director, Product Management, cPacket Networks
  "Order Entry Analytics on a Tap Aggregator"
    Diana Stanescu, Director Finance and Capital Markets, Keysight
Staying cool at speed: Adding 25G to HFT accelerators [Fast Data]
 

Supporting 25G Ethernet can reduce the latency of FPGA or ASIC algorithms--but only if it is implemented well. Signal integrity, power, and thermal challenges exist all the way from the 25G IP, through the package, and across the PCB. In this talk, Michael will explore these challenges and present methods to analyze and resolve them so that you can achieve cool speed with 25G.
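As a rough sense of the headroom at stake (back-of-envelope arithmetic, not part of Michael's material), serialization delay alone scales inversely with line rate:

```python
# Why 25G helps: the time to clock a frame onto the wire shrinks with line rate.
# Illustrative arithmetic only; end-to-end latency also depends on the MAC/PHY
# implementation, which is where the signal-integrity, power, and thermal work
# discussed in this talk comes in.

def serialization_ns(frame_bytes: int, line_rate_gbps: float) -> float:
    """Time to serialize a frame onto the wire, in nanoseconds."""
    return frame_bytes * 8 / line_rate_gbps

for frame in (64, 256, 1500):
    print(f"{frame:>5} B frame: {serialization_ns(frame, 10):7.2f} ns at 10G, "
          f"{serialization_ns(frame, 25):7.2f} ns at 25G")
```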

Innovation Roundup [Fast Data]
  "High-Performance Trading with FPGA Accelerators, Low Latency NICs, and server-class processors"
    John Courtney, Fintech Marketing Manager, AMD
  "Beyond the Tick: AMD & Exegy's Clockless Breakthrough in FPGA Tick-to-trade Latency"
    Olivier Cousin, Director, FPGA Solutions, Exegy
  "Latest Cool Products from Shengli Hardware Lab"
    Louis Liu, CEO, Shengli Technologies
Threading the needle: Navigating constraints to compete in real time [Fast Data]
 

To stay in the game, trading firms must manage ever-growing data rates and keep their architectures competitive, whether it’s making software faster or hardware smarter. But mounting requirements for regulation, compliance, and cyber are straining resources. Meanwhile, finding well-trained talent is only getting harder. What are the best strategies to navigate these constraints? What are the best buy/hold/sell strategies for technologies across the spectrum, from FPGA and ASIC to private clouds? Where does it make sense to buy third-party logic today? Our panel of experts will weigh in.