Hardware Accelerator
For higher-end devices such as drones, service robots, and industrial cobots, the Idle Network stack can include the Edge-AI Accelerator IP:
IP Core licensed to OEMs → integrated alongside their main SoC.
Provides 140 FPS inference throughput at 50 W, nearly 2× faster than an equivalent GPU while using 60% less power.
Optimized for the general matrix multiply (GEMM) operations that dominate convolutional neural networks (CNNs); see the sketch after this list.
Interfaces with the Idle Compiler to accept quantized model binaries.
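For context on the workload the accelerator targets, the snippet below is a plain scalar int8 GEMM of the kind a quantized CNN layer reduces to. It is an illustrative CPU reference only: the function name, data layout, and int32 accumulation are assumptions for the example, not part of the Idle Compiler toolchain or the accelerator's actual interface, which dedicates hardware to exactly this class of operation.

```c
#include <stdint.h>
#include <stddef.h>

/* Reference int8 GEMM: C[m][n] = sum over k of A[m][k] * B[k][n],
 * accumulated in int32. Row-major layout. This scalar loop is what a
 * quantized convolution lowers to after im2col; the accelerator's job
 * is to execute this pattern in parallel rather than one multiply at a
 * time as shown here. Names and layout are hypothetical. */
void gemm_s8_s32(const int8_t *A, const int8_t *B, int32_t *C,
                 size_t M, size_t N, size_t K)
{
    for (size_t m = 0; m < M; ++m) {
        for (size_t n = 0; n < N; ++n) {
            int32_t acc = 0;
            for (size_t k = 0; k < K; ++k) {
                acc += (int32_t)A[m * K + k] * (int32_t)B[k * N + n];
            }
            C[m * N + n] = acc;
        }
    }
}
```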
This gives OEM devices a dual purpose: they perform their primary function and generate financial yield by running Idle tasks during downtime.