As Artificial Intelligence (AI) makes its way into almost every aspect of our lives, one of the major challenges is bringing this intelligence to small, low-power devices. This requires embedded platforms that can deliver extremely high neural network performance at very low power consumption. However, that is still not enough.
Machine Learning developers need a quick, automated way to convert and execute their pre-trained networks on such embedded platforms. In this session, we will discuss and demonstrate tools that complete this task within a few minutes, instead of the months otherwise spent on hand porting and optimization.
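To give a flavor of what such automated conversion involves: one step these tools typically handle is quantizing floating-point weights to 8-bit fixed point for efficient embedded execution. The sketch below is illustrative only, assuming simple symmetric linear quantization; the function names are hypothetical and do not represent Ceva's actual API.

```python
def quantize_int8(weights):
    """Symmetric linear quantization of float weights to int8.

    Illustrative sketch only -- production toolchains perform this
    per layer, often guided by calibration data.
    """
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Map int8 values back to approximate float weights."""
    return [v * scale for v in q]

# Example: a handful of float32-style weights
weights = [0.52, -1.27, 0.003, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

Automating many such steps (quantization, layer mapping, memory layout) across an entire network is what turns months of hand porting into minutes.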
Join Ceva experts to hear about:
- Overview of the leading deep learning frameworks, including Caffe and TensorFlow
- Various topologies of neural networks, including MIMO, FCN, MLPL
- Overview of the most common neural networks, such as AlexNet, VGG, GoogLeNet, ResNet, and SegNet
- Challenges in porting neural networks to embedded platforms
- Ceva's "push-button" approach to converting pre-trained networks into real-time optimized embedded implementations
- Programmer Flow for CNN Acceleration