The Ceva-NeuPro-M Neural Processing Unit (NPU) IP family delivers exceptional energy efficiency tailored for edge computing and cloud inference, while offering scalable performance to handle AI models with over a billion parameters. Its award-winning architecture introduces significant advances in power efficiency and area optimization, enabling it to support massive machine-learning networks, advanced language and vision models, and multi-modal generative AI. With a processing range of 2 to 256 TOPS per core and leading area efficiency, the Ceva-NeuPro-M runs key AI models efficiently. A robust tool suite complements the NPU by streamlining hardware implementation, model optimization, and runtime module composition.
Compatibility with Open-Source AI Frameworks, Including TVM and ONNX
The Ceva-NeuPro-M NPU IP family is a highly scalable, complete hardware and software IP solution for embedding high-performance AI processing in SoCs across a wide range of edge and cloud AI applications.
The heart of the NeuPro-M NPU architecture is the computational unit, scalable from 2 to 32 TOPS.
A single computational unit includes:
A core may contain up to eight computational units, along with a shared Common Subsystem comprising:
NPU cores can be grouped into multi-core clusters to reach performance levels of thousands of TOPS.
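As a rough illustration of the scaling described above, the aggregate peak throughput of a configuration can be estimated by multiplying the per-unit rating, the number of units per core, and the number of cores. This is a hypothetical sketch based only on the figures quoted in this page; `peak_tops` is not part of any Ceva tool or API.

```python
def peak_tops(tops_per_unit: int, units_per_core: int, cores: int) -> int:
    """Estimate aggregate peak throughput (TOPS) for a hypothetical
    configuration, using the ranges quoted above: each computational
    unit scales from 2 to 32 TOPS, and a core holds up to 8 units."""
    assert 2 <= tops_per_unit <= 32, "per-unit range is quoted as 2-32 TOPS"
    assert 1 <= units_per_core <= 8, "up to eight units per core"
    assert cores >= 1
    return tops_per_unit * units_per_core * cores

# A fully populated core: 32 TOPS x 8 units = 256 TOPS per core,
# matching the 2-256 TOPS per-core range quoted above.
print(peak_tops(32, 8, cores=1))   # 256
# An eight-core cluster of such cores reaches the thousands-of-TOPS level.
print(peak_tops(32, 8, cores=8))   # 2048
```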

With enormous scalability and the Ceva-NeuPro Studio full AI software stack, the NeuPro-M family is the fastest route to a shippable implementation for an edge-AI chip or SoC.

Reach out to learn how Ceva can help drive your next Smart Edge design.