STAC Summit, 10 November 2015, London


WHEN
Tuesday, 10 November 2015

WHERE
Haberdashers Hall
18 West Smithfield, London EC1A 9HQ


AGENDA

Click on the session titles to view the slides (may require member permissions).

 

Big Compute

Big Data

Fast Data

 



STAC Exchange [Big Compute | Big Data | Fast Data]

Vendors with exhibit tables at the conference (click here for the exhibitor list).


STAC Update: Compute Benchmarks [Big Compute]

Peter will review the latest benchmark results from STAC-A2 (option Greeks), as well as enhancements to the benchmark suite.

From STAC-A2 to the real world: A case study [Big Compute]

One of the benefits of customer-specified tests like STAC Benchmarks is that the learnings can be applied to real-world situations. In this talk, Robert will provide insight into the algorithmic changes and programmability tools Intel has used to achieve its latest STAC-A2 results. Then Thomas will describe how those transformations and tools led to significant performance improvements in a deployed application at Citi. He will go into some detail on the key algorithms, how they were efficiently parallelized and vectorized, and the performance gains that resulted.

Using Deep Learning to Turn Voice into Profit [Big Compute]

Voice traders each create over 1,000 hours of calls every year, which are typically consigned to a long-term archive until an investigation resurrects them. Increasingly, banks are trying to implement near-real-time call monitoring to supplement keyword spotting in email and messaging. But the purpose is almost always to detect wrongdoing. According to Nigel, very few banks have realized that near-real-time text extracts from phone calls can be used to make smarter decisions—including trading decisions. In this talk, Nigel will explain the opportunities to use Deep Learning on voice to boost profitability as well as compliance. As with all Deep Learning, the key is computing power. As part of this talk, Nigel will discuss the engineering challenges such a workload presents and the ways his firm has learned to make that processing many times faster than previous methods.

STAC Update: Big Data [Big Data]

Peter will describe progress with activities under the "big data" umbrella:
     - Latest enterprise tick analytics benchmark results (STAC-M3)
     - Progress on strategy backtesting benchmarks (STAC-A3)
     - Proposed benchmarks for I/O-intensive areas of risk management
     - Proposed benchmark framework for stream processing for purposes such as trade monitoring

Innovation Roundup: Round 1 [Big Data]
  "Competition drives innovation – Innovation drives value. STAC M3 at its best"
    Terry Keene, CEO, Integration Systems (a McObject Partner)
  "Cloud Resources to Run More Financial Models - Economically, Easily, and Right Now"
    Matt Provost, Systems Engineer, Avere Systems
  "Is Big Computing a Headache?"
    Tuomas Eerola, VP of Business Development, Techila Technologies
  "Optimizing Spark Performance on Cray Aries"
    Philip Filleul, Segment Director – Financial Services, Cray
  "Lenovo EBG"
    Dave Ridley, BDM Servers, Lenovo

NVM Accelerated Cache – Solving IO Bottlenecks at Scale [Big Data]

Modern finance creates data bottlenecks at every turn. Whether it’s a batch risk-management process that pits increasing volumes against fixed deadlines, or predictive machine-learning analytics that need fast response times in the face of a burgeoning number of requests, the speed of executing financial workflows is often gated on the speed with which the firm can get data into and out of applications. Over the past few years, many organizations modernized key systems, adopting high-performance architectures like parallel file systems or in-memory data grids. But business demands have continued to intensify, and underlying technologies have continued to advance. In Glenn’s view, meeting today’s requirements calls for a new approach to data storage and retrieval. In this talk he will explain how current architectures aggravate I/O bottlenecks in data-intensive financial workflows, and he’ll argue that introducing a non-volatile memory (NVM) based accelerated cache solution can deliver orders of magnitude higher throughput for data-intensive workflows, thus enabling faster, more accurate analytics.

High-frequency blockchains? [Fast Data]

Bitcoin has triggered tremendous interest in its underlying architecture: a distributed, cryptographic ledger called a "blockchain". Blockchains (there are several competing frameworks) are intended to enable multiple, unaffiliated parties to log the same event (such as a transaction) in a way that cannot be changed once committed and is therefore non-repudiable. This decentralization of the truth opens up new possibilities to reduce the cost and risk of financial transactions, which dozens of startups and financial firms are actively exploring. But a big problem with blockchains (particularly the implementation behind Bitcoin) is performance. This restricts the application of blockchains to a fairly narrow set of low-frequency transactions. How much of this performance challenge is implementation-specific and how much is inherent in the principles underlying blockchains? What kind of performance is possible? Will it ever be feasible to use blockchains to keep records of (say) public stock transactions? At IBM Research, Tamas studies ways to overcome performance and other blockchain-related challenges. After providing a quick overview for the uninitiated, Tamas will dive into how blockchains work, where the bottlenecks occur, and approaches for overcoming them.
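For readers unfamiliar with the mechanism, the tamper-evidence property described above can be illustrated with a toy hash chain in Python. This is a deliberate simplification for illustration only (real blockchains add distributed consensus, digital signatures, and Merkle trees, which is where the performance bottlenecks the talk addresses arise); the function names are hypothetical.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents (sorted keys for stability)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, events: list) -> None:
    """Append a block whose header commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "events": events})

def verify(chain: list) -> bool:
    """Re-derive each link; tampering with any earlier block breaks the chain."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append_block(chain, [{"trade": "AAPL", "qty": 100}])
append_block(chain, [{"trade": "MSFT", "qty": 50}])
assert verify(chain)

# Changing a committed event invalidates every subsequent link.
chain[0]["events"][0]["qty"] = 999
assert not verify(chain)
```

Because each block commits to the hash of its predecessor, altering any committed event forces recomputation of every later block, which is what makes the log non-repudiable once parties have seen it.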

Implementing MiFID II Time Sync and Audit Trail [Fast Data]

After more than five years of study and debate, the bulk of the new Regulatory Technical Standards (RTS) from ESMA under MiFID II/MiFIR have been finalized and will officially be in force by the start of business 2017—a scant 14 months from now. These regulations have wide-ranging impacts in areas such as transparency, best execution, and market structure that have attracted great attention in board rooms throughout Europe.

But a few technical rules (RTS 25 and parts of RTS 6 and 24) that seem to have received less attention until now also promise to have a big impact. These rules govern the logging of certain events and the accuracy of the timestamps on those events. Details like the definition of “high frequency trading” and the events that must be logged have big implications for the scope of the challenge, while questions abound regarding the statistical meaning of the new standards and how a firm can verify that it is in compliance. Small and medium-sized firms face the prospect of significant engineering to meet the synchronization and logging requirements. And large organizations must grapple with the feasibility and cost of implementing new standards across diverse application and network environments.

How to achieve these things cost effectively—and soon—is the subject of this session at the STAC Summit. Our experts will guide us through an in-depth look at the technical implications throughout a trading enterprise, along with a roll-up-the-sleeves discussion on the alternatives for satisfying the new requirements. The session will consist of four parts:

 

Understanding the challenge

At Credit Suisse, Neil is responsible for the European market connectivity operation, global architecture for market connectivity, and pre-trade risk and regulatory reporting functions. His team has been actively tracking and feeding back on the evolving regulations. Neil is also the Technical Advisor to the Association for Financial Markets in Europe (AFME) and the Co-Chair of the FIX Protocol Clock Synchronization group. Neil will present an overview of the new time sync and record keeping rules, then lay out some of the challenges they pose for various types of market participants.

 

UTC: Where compliance starts

Leon will cover the challenges of UTC synchronization and the pros and cons of contrasting approaches. As part of this, he will summarize the proceedings of a recent Workshop on UTC Traceability, which convened regulators and national labs to discuss high-level issues in UTC alignment across and within geographies.

 

Vendor presentations

A series of vendors will present how they can help market participants meet the new time-sync and auditing requirements.

  • Matthew Knight, Technical Marketing Director, Financial Services, Solarflare [Slides]
  • Matthew Chapman, CTO, Exablaze [Slides]
  • Simon Butcher, Timing Solutions Architect, Microsemi [Slides]
  • Dave Snowdon, Founder, co-CTO, Metamako [Slides]
 

Roundtable discussion

Our roundtable will grapple with the challenges laid out by Neil and questions posed by the audience. This discussion will be highly interactive.

Innovation Roundup: Round 2 [Fast Data]
  "High Speed Networking in the 25/50/100Gb/s Ethernet Era"
    Richard Hastie, Director, UK Financial Services, Mellanox Technologies

 

STAC Update: Network I/O [Fast Data]

Peter will explain the latest developments related to the STAC Network I/O Special Interest Group.

Fastest news propagation today: A study [Fast Data]

Highly automated markets react very quickly to news. And for the last few years, news delivery has been subject to the same innovative forces as delivery of market data: increasingly clever ways to reduce latency. What are the best latencies for networks that carry news, and what kinds of techniques do those networks use (microwave, fiber, other)? In this talk, Stephane will present research that attempts to answer these questions for particular delivery paths by making inferences from public market data. After defining a way to detect trading activity attributable to news, he will present findings for news-delivery paths between certain key locations (such as K Street, Carteret, and Frankfurt) and discuss what those results tell us about the state of the art.

Innovation Roundup: Round 3 [Fast Data]
  "Achieving Deterministic Ultra Low Latency Market Access using FPGA smartNIC"
    Laurent de Barry, Trading Products Manager, Enyx
  "Tradecope - Low-latency solution for algorithmic and high frequency trading"
    Lukas Valach, Product Manager, Netcope Technologies
  "Low Latency for Multi Markets: US Equities deployment"
    Yves Charles, VP Business Development, Novasparks

 

Leveraging High Performance Market Data at the Enterprise Level [Fast Data]

Enterprise market data platforms and low-latency market data platforms evolved separately. In Mark’s opinion, that’s an accident of history rather than a logical consequence. In this talk, Mark will argue that the divergence of platforms has become a burden to the industry and that unified platforms are both necessary and possible. In Mark’s view, engineering excellence matters just as much for high-capacity, low-footprint enterprise distribution as it does for low-latency trading. Thus, he will make the case that the best unified architectures start with a high-performance core.

Re-thinking Market Data Architectures [Fast Data]

(Click here for a post-meeting note from Nigel on OpenMAMA.)

STAC's recent survey of its global membership revealed that users responsible for enterprise market data technology feel an acute need to re-think their architectures. Top of the list are understanding how to simplify and shrink their infrastructure, with new open APIs as a potential key ingredient. What makes these items a priority today? What does the ideal market data architecture look like? What's realistic and what's pure fantasy? What role do open source licenses and open standards have to play? Will market data always be a technology island or, as more sectors of the economy come to rely on streaming data, is there potential for cross pollination with the broader technology ecosystem? What are the first steps a firm can take toward a better architecture? Our panel of experts will weigh in.

PLATINUM SPONSOR


GOLD SPONSORS

Solarflare
DataDirect Networks

Lenovo



MEDIA PARTNERS

About STAC Events & Meetings

STAC events bring together CTOs and other industry leaders responsible for solution architecture, infrastructure engineering, application development, machine learning/deep learning engineering, data engineering, and operational intelligence to discuss important technical challenges in finance.