News & insights

Evidence-led updates from the forefront of financial technology

At STAC, we don’t chase headlines—we deliver the stories that matter. From breakthrough benchmark results to research insights, new member activity, and event highlights, this is where the capital markets tech community stays informed. 

Grounded in what STAC does best

Every update is grounded in what STAC does best: cutting through noise to provide credible, data-driven knowledge that helps firms move faster and think smarter. 

Whether you're a long-time member or just discovering STAC, this is your window into the conversations, innovations and milestones shaping performance in finance and technology. 

Latest Articles

Guest Article
Posted: November 3, 2025
Should you trust LLMs?
Guest Article
Posted: October 31, 2025
How do you ensure ROI on an AI Rollout?
STAC Blog
Posted: October 30, 2025
How STAC Benchmark Test Harnesses Catch Critical Bugs Before Production
Guest Article
Posted: October 30, 2025
When Securing the Message Isn’t Securing the Medium (Part 2)
Guest Article
Posted: October 29, 2025
Securing the Glass Beneath the AI Boom (Part 1)
STAC Blog
Posted: October 28, 2025
Testing the limits of machine learning inference in finance with NVIDIA & Supermicro
Guest Article
Posted: October 21, 2025
Agentic AI: What Is It and Is It About To Join Your Team?
Guest Article
Posted: October 20, 2025
How can you use an LLM effectively?
Guest Article
Posted: October 6, 2025
How to Implement an LLM in a Firm
Guest Article
Posted: October 3, 2025
Are you forcing every problem into an LLM-shaped hole?
Guest Article
Posted: October 1, 2025
Key Elements to an AI Strategy

Latest Reports

STAC Report
Posted: October 28, 2025
STAC-AI™ LANG6 on HPE ProLiant DL380a and NVIDIA H200 GPUs
STAC Report
Posted: October 28, 2025
STAC-AI™ LANG6 on HPE ProLiant DL384 and NVIDIA GH200 Superchips
STAC Report
Posted: October 27, 2025
STAC-M3 Benchmark Results: KX kdb+ 4.1 on Intel, Supermicro, Micron,
Unaudited Report
Posted: October 24, 2025
STAC-A3 Unaudited report by Curated Time Technology
STAC Report
Posted: October 13, 2025
STAC-ML™ Markets (Inference) new world record set by NVIDIA and Supermicro
STAC-ML™ Markets (Inference) on an NVIDIA GH200 Grace Hopper Superchip in a Supermicro server
Unaudited Report
Posted: July 29, 2025
STAC-AI™ LANG6 on NVIDIA GB200 Grace Blackwell Superchip
NVIDIA publishes unaudited STAC-AI inferencing benchmark results for GB200
Unaudited Report
Posted: July 2, 2025
STAC-AI™ LANG6 on NVIDIA GH200 Grace Hopper Superchip
NVIDIA publishes unaudited STAC-AI inferencing benchmark results for GH200
Unaudited Report
Posted: May 19, 2025
STAC-A2 Risk Computation on 2x Intel 6980P Processors with RDIMMs
Intel recently performed the STAC-A2 Benchmark tests on a stack consisting of 2 x Intel Xeon 6980P Processors with RDIMMs.
STAC Report
Posted: May 13, 2025
STAC-A2 Pack for oneAPI (Rev R) with 2 x Intel Xeon 6980P Processors, Micron MRDIMMs and Red Hat Enterprise Linux 9.5
STAC recently performed STAC-A2 Benchmark tests on a stack consisting of 2 x Intel Xeon 6980P Processors, Micron MRDIMMs, and Red Hat Enterprise Linux 9.5.
STAC Report
Posted: April 6, 2025
Extending STAC-ML with Gradient Boosted Tree Models
STAC has completed a Proof of Concept (POC) benchmark, extending the STAC-ML™ Markets (Inference) benchmarks to gradient boosted tree models.
Unaudited Report
Posted: December 19, 2024
STAC-AI™ LANG6 on NVIDIA GB200 Grace Blackwell Superchip
NVIDIA publishes unaudited STAC-AI inferencing benchmark results for GB200

Latest Research

Research Note
Posted: August 6, 2025
LLM-Based RAG Evaluation Metrics
LLM-Based RAG Evaluation Metrics: Model Relatedness and Consistency
Research Note
Posted: May 8, 2025
Comparing LLM Benchmarking Frameworks
We recently conducted a study comparing multiple LLM benchmarking frameworks, including the STAC-AI™ benchmarks.
Research Note
Posted: April 20, 2025
Performance And Efficiency Comparison Between Self-Hosted LLMs And API Services
We recently conducted a study comparing self-hosted LLMs and equivalent API models using the STAC-AI™ benchmarks.
Research Note
Posted: February 13, 2025
LLM Model Serving Platform Comparison
We recently conducted a study comparing two model-serving platforms, vLLM and Hugging Face’s TGI.