STAC Research Note: STAC-ML Markets (Inference) Processor Comparisons in Azure

We looked at six SUTs, each a large Microsoft Azure VM, featuring processors from three vendors. For each processor, we tested both latency- and throughput-optimized configurations of the STAC-ML naive inference implementation, with ONNX Runtime as the inference engine. We used the STAC-ML Test Harness to find the optimal configuration for each SUT.

There was not a single “winner”. Performance and business use-case analyses showed that each VM carved out an optimal price-performance niche at different points along the latency, throughput, and cost spectra. We also examined the consistency of performance and the impact of ONNX Runtime multithreading on performance.
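To illustrate the multithreading knobs at play, the following is a minimal sketch using the ONNX Runtime Python API. It is not the STAC-ML implementation itself; the model file name, thread counts, and input shape are illustrative assumptions, not settings from the tested SUTs.

```python
# Minimal sketch of ONNX Runtime threading options for inference.
# Model path, thread counts, and input shape are illustrative only.
import numpy as np
import onnxruntime as ort

opts = ort.SessionOptions()
# Threads used to parallelize work within a single operator (e.g. a matrix multiply).
opts.intra_op_num_threads = 4
# Threads used to run independent operators concurrently; only relevant in parallel mode.
opts.inter_op_num_threads = 1
opts.execution_mode = ort.ExecutionMode.ORT_SEQUENTIAL

# Hypothetical model file standing in for an inference workload.
session = ort.InferenceSession("model.onnx", sess_options=opts)

# Latency-oriented call: one observation per inference (batch size 1).
x = np.random.rand(1, 32, 64).astype(np.float32)  # illustrative input shape
outputs = session.run(None, {session.get_inputs()[0].name: x})
```

In broad terms, latency-optimized configurations tend to favor small batches and threading tuned to a single inference, while throughput-optimized configurations tend to favor larger batches and more aggressive parallelism; the reports detail the specific settings used for each SUT.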

The full set of reports compared in this note is:

The note and accompanying data tables (below) detail our findings.


The use of machine learning (ML) to develop models is now commonplace in trading and investment. Whether the business imperative is reducing time to market for new algorithms, improving model quality, or reducing costs, financial firms have to offload major aspects of model development to machines in order to continue competing in the markets.