VOLLO

Ultra-low latency machine learning inference accelerator for latency-critical applications. Runs on Altera Agilex 7 Series FPGAs (Intel Agilex® 7 FPGAs and SoC FPGAs, F-Series). Proven over hundreds of thousands of production trading hours in financial trading, and also winning in wireless telecoms, network security and defense.

Myrtle.ai is a leader in low-latency AI inference. We combine deep technical expertise with future-looking vision to ensure that ML developers can meet their inference performance goals with our easy…

Lowest-latency machine learning inference accelerator: outperforms GPUs where latencies of tens of microseconds or less are required. The SDK enables ML developers to compile their models and run them on VOLLO directly from PyTorch or TensorFlow, without requiring any FPGA expertise or tools. For those who do have FPGA expertise and tools, an FPGA netlist version is also available.

Target markets: Data Center, Cloud (Public, Private, Hybrid), Defense, Wireless

What's Included
The SDK includes all IP and tools to compile and run models on VOLLO, whether targeting supported FPGA-based PCIe accelerator cards or SmartNICs, or stand-alone FPGAs.

Categories: Acceleration / AI / Cloud, Design Services
Status: Production
Partner Solutions - 2026-03-10