Steatite edge AI systems & accelerators deliver real-time visual inference without the need for high-bandwidth cloud connectivity.
They are based on Nvidia CUDA accelerators, including Jetson, GeForce, Tesla and Quadro, along with Intel Movidius VPUs.
Real-Time Inference – Running AI models on local hardware delivers near-instant results.
Power Efficient – Specialist AI acceleration hardware and optimisation techniques help reduce power consumption.
Frameworks – Regardless of the AI framework used, we can help to get your model running in a live environment (see the export sketch after this list).
Harsh Environment – Our Edge AI systems are designed to tolerate shock, vibration, extremes of heat and cold, water and dust.
Connectivity – AI Inference at the edge removes the bandwidth requirements associated with cloud-based AI.
Take a look at some of our AI systems and accelerators below, or contact one of the team to learn more.
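Whatever framework a model starts life in, a common route to edge deployment is exporting it to a portable format such as ONNX, which runtimes like TensorRT and OpenVINO can consume. The sketch below is a minimal, illustrative example using PyTorch; the network, file name and opset are placeholder assumptions rather than a prescribed Steatite workflow.

```python
# A minimal sketch, assuming a PyTorch model; the network, file name and opset
# are placeholders, not a prescribed workflow.
import torch
import torchvision

model = torchvision.models.resnet18()   # any trained model would take its place
model.eval()

# The exporter traces the model with a dummy input of the expected shape
# (here a single 3 x 224 x 224 image).
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",            # portable file consumed by the target edge runtime
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)
```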
Power efficient AI-at-the-edge inference systems, boards and modules based on Nvidia Jetson TX2, Nano & Xavier accelerators.
Power efficient AI-at-the-edge inference systems and add-on cards based on Intel Movidius accelerators.
High performance AI inference systems based on Nvidia GeForce and Quadro graphics cards and latest generation Intel CPUs.
High performance AI inference systems, based on Hailo architecture & designed to accelerate embedded AI applications at the edge.
• NVIDIA® Jetson Xavier™ NX GPU.
• GPU-Accelerated AI Computing.
• Edge AI Smart City Platform.
• 15W GbE PoE Support.
• -30°C to 60°C Operating Temp.
• JetPack Supported.
The Axiomtek AIE100-903-FL-NX is a compact, fanless edge AI system with an NVIDIA® Jetson Xavier™ NX series SoM, 8GB of LPDDR4x memory on board and support for Linux-based operating systems.
• NVIDIA® Jetson Nano™ GPU.
• GPU-Accelerated AI Computing.
• Edge AI Smart City Platform.
• 15W GbE PoE Support.
• -30°C to 60°C Operating Temp.
• JetPack Supported.
The Axiomtek AIE100-903-FL is a compact, fanless edge AI system with an NVIDIA® Jetson Nano™ series SoM, 4GB of LPDDR4 memory on board and support for Linux-based operating systems.
• Compact Fanless System
• Deep Learning Acceleration
• NVIDIA® Jetson AGX Xavier™
• 32GB Onboard Memory
• 32GB eMMC Storage
• 24V DC Input
To deliver Artificial Intelligence (AI) at the edge, ADLINK’s DLAP-301-Nano Edge AI platform integrates the NVIDIA® Jetson AGX Xavier™ to accelerate deep learning workloads for object detection, recognition and classification.
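On JetPack-supported Jetson systems such as those above, deep learning models are typically accelerated by building a TensorRT engine on the device. The sketch below assumes the TensorRT 8.x Python API (as shipped with recent JetPack releases) and an already exported ONNX model; the file names and builder settings are illustrative only.

```python
# A minimal sketch, assuming TensorRT 8.x and a previously exported ONNX model;
# file names and builder settings are illustrative, not vendor-supplied code.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:          # placeholder model file
    if not parser.parse(f.read()):
        raise RuntimeError("failed to parse ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)        # FP16 suits the Jetson GPU's power budget

engine = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:        # engine is loaded at runtime for inference
    f.write(engine)
```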
• M.2 AE Key Form Factor
• 1 x Intel Myriad X VPU
• 4.5W Power Consumption
• OpenVINO Toolkit Support
• -20°C to 60°C Operating Temp
• AI Edge Computing Ready
The Mustang-M2AE-MX1 M.2 AE-key card includes one Intel® Movidius™ Myriad™ X VPU, providing a flexible AI inference solution for compact and embedded systems.
• M.2 BM Key Form Factor
• 2 x Intel Myriad X VPUs
• 7.5W Power Consumption
• OpenVINO Toolkit Support
• -20°C to 60°C Operating Temp
• AI Edge Computing Ready
The Mustang-M2BM-MX2 card includes two Intel® Movidius™ Myriad™ X VPUs, providing a flexible AI inference solution for compact and embedded systems.
• Mini PCIe Form Factor
• 2 x Intel Myriad X VPUs
• 7.5W Power Consumption
• OpenVINO Toolkit Support
• -20°C to 60°C Operating Temp
• AI Edge Computing Ready
The Mustang-MPCIE-MX2 card includes two Intel® Movidius™ Myriad™ X VPUs, providing a flexible AI inference solution for compact and embedded systems.
• PCIe x 2 Form Factor
• 4 x Intel Myriad X VPUs
• 15W Power Consumption
• OpenVINO Toolkit Support
• Multiple Cards Supported
• -20°C to 60°C Operating Temp
The Mustang-V100-MX4 is a PCIe Gen 2 x 2 card that includes four Intel® Movidius™ Myriad™ X VPUs, providing a flexible AI inference solution for compact and embedded systems.
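All of the Mustang cards above are programmed through Intel's OpenVINO toolkit. As a rough illustration, the sketch below compiles and runs a model on one of the Myriad X VPUs; it assumes an OpenVINO release that still ships the MYRIAD plugin (up to the 2022.x line) and uses placeholder model and input names.

```python
# A minimal sketch, assuming an OpenVINO release with the MYRIAD plugin and a
# placeholder IR model; not vendor-supplied code.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")                 # IR (or ONNX) model, placeholder name

# Each Myriad X VPU on the card is exposed to OpenVINO as a "MYRIAD" device.
compiled = core.compile_model(model, device_name="MYRIAD")

# One dummy frame shaped to the model's assumed input (1 x 3 x 224 x 224).
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

result = compiled([frame])[compiled.output(0)]       # synchronous inference on the VPU
print(result.shape)
```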
• 2 x Intel Myriad X VPUs
• Low Power Design
• OpenVINO Toolkit Support
• M.2 2280 B+M Key Form Factor
• Complies With CE/FCC Class A
• AI Edge Computing Ready
The EGPA-I201 enhances vision inference applications such as facial recognition and vehicle registration plate recognition, along with many other machine vision workloads. Thanks to its low power consumption and high performance, . . .
• MXM 3.1 Type B Form Factor
• 2048 NVIDIA CUDA Cores
• 6.4 TFLOPS SP Performance
• 16GB GDDR5 Memory
• 192GB/s Memory Bandwidth
• 100W Maximum Power
Meeting the needs of embedded, ruggedised, and mobile system builders, the EGX-MXM-P5000 utilises Quadro Pascal architecture to deliver superior graphics and computing performance.
• MXM 3.1 Type B Form Factor
• 1280 NVIDIA CUDA Cores
• 3.9 TFLOPS SP Performance
• 4GB GDDR5 Memory
• 168GB/s Memory Bandwidth
• 75W Maximum Power
Meeting the needs of embedded, ruggedised, and mobile system builders, ADLINK’s EGX-MXM-P3000 Embedded MXM GPU Module is specifically designed to accommodate form factors incompatible with conventional PCI Express cards, and . . .
• MXM 3.1 Type A Form Factor
• 768 NVIDIA CUDA Cores
• 2.3 TFLOPS SP Performance
• 4GB GDDR5 Memory
• 96GB/s Memory Bandwidth
• Supports Up to 4 FHD Displays
ADLINK’s EGX-MXM-P2000 Embedded MXM GPU Module features an advanced NVIDIA Quadro GPU built on NVIDIA Pascal™ architecture in the MXM 3.1 Type A form factor. The EGX-MXM-P2000 has 768 NVIDIA CUDA cores . . .