The tables presented here detail the comparability of systems under test (SUTs) running the STAC-ML™ Markets (Inference) benchmark Tacana suite. SUT benchmark results are deemed either comparable or not comparable based on the error metric (STAC-ML.Markets.Inf.T.[Model].ERR.v1), which is the absolute difference between a SUT's inference values and the corresponding inference values produced by the Quality Reference Implementation.
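The error metric described above can be sketched as a simple element-wise comparison. The function names, the use of NumPy arrays, and the idea of a single comparability tolerance are illustrative assumptions, not part of the STAC-ML specification; the actual benchmark defines its own pass criteria per model.

```python
import numpy as np

def absolute_error(sut_outputs: np.ndarray, reference_outputs: np.ndarray) -> np.ndarray:
    """Element-wise absolute difference between a SUT's inference values
    and those of the Quality Reference Implementation (the ERR metric)."""
    return np.abs(sut_outputs - reference_outputs)

def is_comparable(sut_outputs: np.ndarray,
                  reference_outputs: np.ndarray,
                  tolerance: float) -> bool:
    """Illustrative rule (an assumption, not the official criterion):
    deem results comparable when every error is within the tolerance."""
    return bool(np.all(absolute_error(sut_outputs, reference_outputs) <= tolerance))
```

For example, `is_comparable(np.array([0.10, 0.20]), np.array([0.10, 0.21]), 0.02)` would accept the results, while a tolerance of `0.001` would reject them.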