A Comprehensive Inferencing Graph AI Compiler for SensPro, NeuPro and Ceva-XM processors
The Ceva Deep Neural Network (CDNN) is a comprehensive AI compiler technology that creates fully optimized runtime software for SensPro sensor hub DSPs, NeuPro-M AI processor architectures and Ceva-XM vision DSPs. Targeted at mass-market embedded devices, CDNN incorporates a broad range of network optimizations, advanced quantization algorithms, data flow management and fully optimized compute libraries for CNNs and RNNs into a holistic solution that enables cloud-trained AI models to be deployed on edge devices for inference processing.
The CDNN AI compiler enables an extremely simple and streamlined transition of existing deep neural networks to an embedded environment.
The NeuPro AI processor ensures superior performance with minimal power consumption. Separately, each component of the CDNN AI compiler is a powerful enabler of embedded imaging and vision applications. Combined, these pieces deliver the ultimate toolkit to support new network structures and changing layer types of deep neural networks.
Ceva supplies a full development platform for partners and developers based on the SensPro, NeuPro and Ceva-XM architectures to enable the development of deep learning applications using the CDNN, targeting any advanced network.
The CDNN AI compiler streamlines implementations of deep learning in embedded systems by automatically quantizing and optimizing offline pre-trained neural networks into real-time, embedded-ready networks for SensPro, NeuPro and Ceva-XM cores and customer neural network engines. This enables real-time, high-quality image classification, object recognition, and segmentation, significantly reducing time-to-market for running low-power machine learning in embedded systems.
- Automatic quantization and conversion to embedded-ready networks
- Greatly reduces memory bandwidth for any network via various mechanisms, including layer fusion and compression
- Enables heterogeneous computing architectures; optimizes for and enables seamless utilization of custom AI engines
- CDNN Compiler converts pre-trained neural network models and weights from offline training frameworks (such as Caffe or TensorFlow) to real-time network models
- CDNN Run-Time software accelerates deployment of machine learning in low-power embedded processors
- CDNN-Invite API enables seamless incorporation and usage of custom AI engines within the CDNN framework
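To make the quantization step above concrete, here is a minimal sketch of generic post-training symmetric int8 weight quantization in Python/NumPy. This illustrates the kind of conversion an AI compiler performs on offline-trained weights; it is not Ceva's actual CDNN algorithm, and all names here are illustrative.

```python
import numpy as np

def quantize_symmetric_int8(weights):
    """Generic post-training symmetric int8 quantization of a weight tensor.

    Illustrative only -- not Ceva's CDNN implementation. Maps the largest
    weight magnitude to 127 and rounds everything else to that grid.
    """
    scale = max(np.max(np.abs(weights)) / 127.0, 1e-8)  # guard against all-zero tensors
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from the int8 values."""
    return q.astype(np.float32) * scale

# Quantize a small float32 weight matrix and measure the worst-case error,
# which is bounded by half a quantization step (scale / 2).
w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_symmetric_int8(w)
err = np.max(np.abs(w - dequantize(q, scale)))
```

In practice a compiler applies this per layer (often per channel) and also quantizes activations using calibration data, but the core idea is the same scale-and-round mapping shown here.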
CDNN Graph Compiler Demo
The Ceva Deep Neural Network (CDNN) is a comprehensive graph compiler that simplifies the development and deployment of deep learning systems for mass-market embedded devices. CDNN incorporates a broad range of network optimizations, advanced quantization algorithms, data flow management and fully optimized CNN, RNN, and other network types into a holistic solution that enables cloud-trained AI models to be deployed on edge devices for inference processing, using low compute and memory resources. CDNN enables heterogeneous computing and can flexibly split a network between multiple compute engines, such as SensPro or NeuPro processors and custom AI engines, to ensure superior performance with minimal power consumption.

In this video, we show how the CDNN Graph Compiler and GUI enable users to quickly configure the CDNN tool and easily analyze their neural networks' performance on any of Ceva's AI processors. The example we use is inferencing of the SSD MobileNet network on the SP500 DSP, both natively (DSP only) and with a hardware accelerator connected via the CDNN-Invite API for higher performance.
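The heterogeneous split described above can be sketched in a few lines: a graph compiler walks the layer sequence and greedily assigns consecutive layers to whichever engine supports them, falling back to the DSP for unsupported operators. Everything below (the capability set, function names, and layer list) is a hypothetical illustration, not Ceva's actual CDNN partitioning logic.

```python
# Hypothetical sketch of heterogeneous network partitioning between a
# custom accelerator and a DSP fallback. Illustrative only -- not the
# actual CDNN / CDNN-Invite implementation.

ACCEL_SUPPORTED = {"conv2d", "depthwise_conv2d", "relu6"}  # assumed capability set

def partition(layers):
    """Greedily group consecutive layers onto the same compute engine."""
    segments = []  # list of (engine, [layer, ...])
    for op in layers:
        engine = "accelerator" if op in ACCEL_SUPPORTED else "dsp"
        if segments and segments[-1][0] == engine:
            segments[-1][1].append(op)  # extend the current segment
        else:
            segments.append((engine, [op]))  # start a new segment
    return segments

# Simplified SSD-MobileNet-like layer sequence: the convolutional backbone
# maps to the accelerator, while post-processing stays on the DSP.
net = ["conv2d", "relu6", "depthwise_conv2d", "relu6", "softmax", "nms"]
plan = partition(net)
```

A real compiler would weigh data-transfer cost between engines when deciding where to cut the graph, not just operator support, but the segment structure it produces is analogous.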