Ceva NPU Core Targets TinyML Workloads

November 26, 2025

1 min read

NeuPro-Nano NPU for TinyML and Edge AI

Ceva’s NeuPro-Nano delivers up to 200 GOPS in a stand-alone, licensable neural processing unit (NPU) designed for TinyML workloads in power-constrained edge IoT devices. It enables AI acceleration without a host processor, reducing die area by up to 45% compared to traditional designs.

Download the Full NeuPro-Nano Technical Brief  

What You’ll Discover in the Full Brief

  • How NeuPro-Nano accelerates TinyML workloads while minimizing power use
  • The advantage of stand-alone operation over host-CPU integration
  • Advanced features like NetSqueeze lossless compression & 2× sparsity acceleration
  • Supported AI frameworks for rapid deployment
  • Real-world use cases in consumer, industrial, and healthcare IoT
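The "2× sparsity acceleration" bullet refers to skipping computation on zero-valued weights. As a rough, hardware-agnostic illustration of the idea (not a model of NeuPro-Nano's actual engine), a sparse dot product can simply omit multiply-accumulate (MAC) operations wherever the weight is zero:

```python
# Toy illustration of why weight sparsity speeds up inference: multiply-
# accumulate (MAC) operations on zero weights can be skipped entirely.
# Hypothetical sketch; not Ceva code.

def sparse_dot(weights, activations):
    """Dot product that skips zero weights, counting the MACs performed."""
    total, macs = 0.0, 0
    for w, x in zip(weights, activations):
        if w != 0.0:  # a zero weight contributes nothing, so skip the MAC
            total += w * x
            macs += 1
    return total, macs

weights = [0.0, 0.5, 0.0, -1.0, 0.0, 2.0, 0.0, 0.25]  # 50% zeros
acts = [1.0] * 8
value, macs = sparse_dot(weights, acts)
# At 50% sparsity, only half of the 8 MACs are executed: macs == 4.
```

At 50% weight sparsity this halves the MAC count, which is the intuition behind a 2× speedup figure; real NPUs exploit the same property with dedicated zero-skipping hardware rather than a software loop.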

Key Highlights

Stand-Alone Capability

Integrated control and management functions let NeuPro-Nano run AI, feature extraction, and control code without a host CPU.

Power-Efficient AI Acceleration

Optimized for battery-powered devices, NeuPro-Nano processes models more efficiently than MCUs or DSPs.

Flexible, Scalable Design

Supports int4–int32 and FP16/FP32, transformer models, and open AI frameworks like TensorFlow Lite Micro.
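To make the low-precision data types concrete: frameworks such as TensorFlow Lite Micro represent float weights on an integer grid using a scale and zero-point (affine quantization). A minimal, framework-free sketch of that scheme (illustrative only, not Ceva's implementation):

```python
# Minimal sketch of affine (scale/zero-point) integer quantization, the
# general scheme behind int8-style inference. Illustrative only; it does
# not reflect NeuPro-Nano internals.

def quantize(values, num_bits=8):
    """Map float values onto a signed integer grid of the given width."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin) or 1.0
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the integer representation."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, scale, zp = quantize(weights)          # int8 codes in [-128, 127]
restored = dequantize(q, scale, zp)       # each value within one scale step
```

Narrower types (e.g. int4) shrink weight storage further at the cost of a coarser grid, which is why hardware support for the full int4–int32 range matters for fitting models into tight edge memory budgets.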

Access the Complete NeuPro-Nano Specs

The full document includes:

  • Detailed architecture diagrams & block breakdown
  • Performance comparisons with competing NPUs
  • Quantization and sparsity acceleration techniques
  • Integration options for different MCU and NPU configurations
  • Deployment guidelines for specific industries


Get in touch

Reach out to learn how Ceva can help drive your next Smart Edge design.

Contact Us Today