Our inference accelerator IP boosts AI performance, enabling faster, more efficient processing for tasks such as object detection, natural language processing (NLP), and image recognition.
Configurable CNN accelerator for fast, efficient inference on VGG16, AlexNet, ResNet, EfficientNet-Lite, YOLOv6-Nano, and custom models at up to 8-bit precision.
Configurable transformer accelerator for fast, efficient inference on LLaMA and custom models, supporting INT8, INT4, FP8, FP16, and BF16 precision.
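To illustrate the reduced-precision formats named above, here is a minimal sketch of symmetric per-tensor INT8 quantization, the kind of scheme accelerators commonly use to map FP32 weights into 8-bit integers. The function names and the scaling strategy are illustrative assumptions, not the accelerator's actual implementation.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization: map floats to [-127, 127] (illustrative sketch)."""
    scale = float(np.max(np.abs(x))) / 127.0
    if scale == 0.0:          # guard against an all-zero tensor
        scale = 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original floats from the INT8 values."""
    return q.astype(np.float32) * scale

weights = np.array([0.5, -1.2, 0.03, 1.27], dtype=np.float32)
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
# Round-trip error is bounded by half a quantization step (scale / 2).
```

INT4, FP8, FP16, and BF16 follow the same idea with different bit budgets, trading accuracy against the smaller memory footprint and higher throughput that make on-device inference practical.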