STAC Summit, 5 Jun 2017, NYC
STAC Summits bring together industry leaders in architecture, app dev, infrastructure
engineering, and operational intelligence to discuss important technical challenges in
the finance industry. Come to hear leading ideas and exchange views with your peers.
WHERE
New York Marriott Marquis, 1535 Broadway, New York
Astor Ballroom
Agenda
Click on the session titles to view the slides and videos (may require member permissions).
 
Big Compute
Fast Compute
Big Data
Fast Data
 
STAC Exchange
Vendors with exhibit tables at the conference (click here for the exhibitor list).
STAC Update: Big Compute
Peter will discuss the latest research and Council activities in compute-intensive workloads.
Using deep learning to trade: Practical lessons for technologists
Qplum is a hedge fund that successfully applies deep learning (DL) techniques to trading and investment. At the last New York STAC Summit, its co-founder and machine-learning expert, Gaurav Chakravorty, offered a few hints about the fund's approach to DL (see the video here). So we asked him back for a deeper dive. In this talk, Gaurav will detail the kinds of DL that Qplum has found useful, as well as some of the blind alleys the fund has gone down. He will then address some of the technical and operational aspects of trading using DL. What are the key bottlenecks in training and inference? Which software frameworks and hardware platforms have proven most useful for those workloads? What does a deployment look like? What are the scaling challenges and the key drivers of cost? And how does DevOps work when much of the "Dev" is handled by machines?
Wall Street and the race to IT agility
For decades, business people have dreamt of a world where information technology was infinitely flexible: quickly and automatically scaling up and down in each location as business conditions demand. At last that dream seems to be nearing reality. The innovations keep coming, from virtualization, containers, and private cloud frameworks to software-defined everything and public clouds offering you-name-it-as-a-service. While some areas of trading organizations can exploit this new agility paradigm, other areas face constraints, especially the front office. The need to minimize latency constrains both the location of resources and the tolerance for abstractions. Business logic needs to change frequently in response to market conditions, which requires a lot of data to be collected and analyzed with increasing sophistication. How are leading firms managing infrastructure scattered across their headquarters, dozens of colocation centers, and perhaps public clouds? Which workloads are the low-hanging fruit? Which agility-enabling technologies are most relevant to high-performance financial applications? And what about the data? Liberating computation to roam free is great in theory, but not if the required data acts as a ball and chain. Our cross-functional panel will dig into these questions and more.
STAC Update: Big Data
Peter will discuss the latest benchmark results involving big data workloads, including tick analytics, resource management in multi-tenant clusters, and optimizing Spark performance.
Innovation Roundup
"How to Predict Yesterday's Weather with 100% Accuracy" Ike Nassi, Chief Technology Officer, TidalScale
"High Performance Software Storage for Financial Modeling" Bjorn Kolbeck, co-founder/CEO, Quobyte
"Bringing STAC-M3 to Life: Kx for DaaS and Thomson Reuters' VA8" Fintan Quill, Senior Pre-Sales Engineer, Kx Systems
"Quasardb: Time series at scale" Edouard Alligand, President and CTO, quasardb
"The HDF Group: Thought Leader in Exascale Computing Takes on Real-Time PCAP Ingestion, Storage, and Analytics" David Pearah, CEO, HDF Group
"Realtime Persistent Computing on Google Cloud and Intel" Matt Meinel, Senior Director, Business Development, Financial Services, Levyx
 
The brave new world of big I/O
Analytic demands in finance are putting tremendous pressure on data infrastructures, whether it's quants and AI programs searching for optimal trading and investment strategies or risk systems generating all the scenarios required by new regulations. At the same time, there's momentous change on the supply side. The economics of flash and interconnects make it possible in theory to achieve leaps in both performance and performance per dollar (current shortages notwithstanding), while cloud providers continue to play a more significant role. With these trends clearly in sight, a new generation of innovators has begun to bring products to market that rethink how data is stored and accessed and how much that should cost. Their points of view have implications for how application architects think about data access. Our panelists will debate a number of alternative approaches.
STAC Update: Fast Data
Peter will discuss the latest research and Council activities related to low-latency/high-throughput workloads.
Innovation Roundup
"A deeper look into the network" Bill Webb, Director, Ethernet Switch Sales, Mellanox
"Solarflare Updates" Davor Frank, Senior Solutions Architect, Solarflare Communications
"Metamako platforms: Robust, flexible, controllable" Dr. David Snowdon, CTO, Metamako
"Beyond Latency" Laurent de Barry, co-founder and Trading Products Manager, Enyx
"Product Update: NovaLink 30% latency drop, new feeds and features" Arnaud Lesourd, Senior Application Engineer, NovaSparks
 
Traceability reporting for timestamps
Organizations subject to timestamp-accuracy regulations such as Europe's MiFID 2 (which applies to many US firms) and the CAT NMS Plan must be able to demonstrate to regulators that they have used best practices to ensure compliance. Monitoring is part of this, but so is testing, which covers things that can't be monitored or that need to be prevented. The European regulators have in fact said that "relevant and proportionate testing" is required. But how does a firm persuade a regulator that its testing has followed best practices? And how should it relate its testing back to the ultimate goal of timestamp traceability? The STAC-TS Working Group has been hard at work answering these questions with industry standards and the tools to support them. Peter will provide a brief review of what it has delivered and what is coming.
Establishing a 10GbE timestamp reference
The accuracy with which we can measure the latency of a network-attached device is limited by the relative accuracy of the network timestamps used to calculate that latency. Likewise, measuring the absolute accuracy of hardware or software timestamps relative to a reference like UTC or NIST requires a device with higher absolute accuracy than that of the subject timestamps. Trading firms are therefore keen to have a reference network device that is more accurate than anything else in their environment. At the last STAC Summit, David described the bootstrap problem this presents: how do you measure the accuracy of the reference device? That is, what's the reference for the reference? He then outlined a methodology for measuring the absolute accuracy of network timestamps to the nanosecond. However, the only results he provided were for timestamps on 1GbE packets. Achieving this level of accuracy for 10GbE, and measuring it throughout the second between PPS discipline pulses, is a more challenging class of problem. In this talk, David will describe such a methodology, vetted and enhanced by the STAC-TS Working Group, and reveal its first results.
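To make the first point concrete, here is a minimal back-of-the-envelope sketch in Python. It is not from David's methodology, and every accuracy figure in it is a hypothetical placeholder, but it shows how timestamp error bounds flow directly into a latency measurement:

```python
# Hypothetical illustration: how timestamp accuracy limits latency measurement.
# All figures are placeholders, not STAC results.

def latency_error_bound_ns(err_ingress_ns: float, err_egress_ns: float) -> float:
    """Worst-case error of a latency computed as t_egress - t_ingress,
    given the accuracy of each timestamp (errors can add)."""
    return err_ingress_ns + err_egress_ns

# If each capture timestamp is accurate to +/-5 ns relative to the other,
# a reported device latency of 80 ns is really 80 +/- 10 ns:
print(latency_error_bound_ns(5.0, 5.0))  # -> 10.0

# And to verify a subject timestamp against UTC to the nanosecond, the
# reference must be meaningfully more accurate than the subject:
subject_err_ns = 5.0      # hypothetical subject-timestamp accuracy
reference_err_ns = 0.5    # hypothetical reference accuracy
assert reference_err_ns < subject_err_ns, "reference no better than subject"
```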
Innovation Roundup
"New, lower-cost, higher-performance trade data recording and playback options from Endace" Tom Leahy, Senior Sales Engineer, Endace
"Resilient Timing Solutions integrating Satelles STL and BroadShield" John Fischer, CTO, Spectracom
"Going with the Flow" Michael Wright, Business Development Manager, Napatech
"100G monitoring: More Bandwidth, New Challenges" Josh Joiner, Director, Solution Engineering, cPacket
 
Picosecond scale measurements for nanosecond scale trading
Across liquid markets, latency and jitter continue to shrink. This means that the differences in latency that can have a material effect on P&L continue to shrink as well. When it's possible to get from tick to trade in a few hundred nanoseconds, firms start to care about every nanosecond in their information pathways. But one of the challenges in squeezing out nanoseconds is measuring them. For measurement to stay ahead of the measured, we must enter the world of picoseconds. Matthew spends a lot of time in this world, and according to him, it is a strange and interesting place. In this talk, he'll take us on a brief tour of the challenges, pitfalls, and unexpected findings along the way.
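The arithmetic behind that claim is worth a quick illustration. The sketch below uses hypothetical numbers (not Matthew's data) and the common metrology rule of thumb that an instrument should be roughly ten times finer than the effect it measures, which is what pushes nanosecond-scale effects into picosecond-scale measurement:

```python
# Hypothetical illustration of why nanosecond-scale trading pushes
# measurement into the picosecond regime. Numbers are placeholders.

IMPROVEMENT_NS = 1.0   # a 1 ns change we want to detect on a ~300 ns path

for resolution_ps in (1000, 100, 10):     # measurement grain in picoseconds
    resolution_ns = resolution_ps / 1000.0
    # Rule of thumb: instrument ~10x finer than the effect being measured.
    detectable = resolution_ns <= IMPROVEMENT_NS / 10
    print(f"{resolution_ps:5d} ps grain -> resolves a 1 ns change: {detectable}")

# Only the 100 ps and 10 ps grains pass the 10x rule, i.e., nanosecond
# effects call for picosecond-scale measurement.
```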
Demystifying FPGA design and functional verification in the age of digital intelligence
The number of capital markets firms developing FPGA-based solutions is growing. So is the pool of firms considering shifting at least some of their business logic to FPGAs. As this trend continues, it's helpful for technology managers to have as much context as possible about best practices in FPGA development, including both technical and human factors. Those who have not started down the FPGA path may wonder what it really entails. Those already neck-deep in FPGA development may wonder how their experiences compare to those of their peers. As an organization that serves a broad swathe of the FPGA developer community, Mentor Graphics has a lot of hard data on these questions, in finance and many other industries. In this talk, Harry will share some of these findings. How are firms maximizing their FPGA development productivity? How do other FPGA shops look in terms of skillsets, team dynamics, technologies, and methodologies? Come to hear the answers and ask questions of your own.
 
 
PLATINUM SPONSOR
GOLD SPONSORS
MEDIA PARTNERS