Harnessing neural networks to accelerate inference at the edge. Artificial Intelligence (AI) is changing our world, and Edge AI brings its power even closer to action. By reducing latency and enhancing privacy, Edge AI enables advanced AI capabilities on even the most resource-constrained devices. From autonomous vehicles to wearable health tech and industrial IoT, Edge AI is powering a new generation of smart, real-time decision-making devices.
Ceva offers self-contained Edge Neural Processing Units (NPUs) that operate independently without relying on a host CPU. They’re designed to handle a broad spectrum of AI applications, from the ultra-low-power and always-on requirements of Embedded ML to the high computational demands of Generative AI. Ceva’s scalable NPU family supports AI processing capabilities ranging from tens of GOPS (Giga Operations Per Second) to hundreds of TOPS (Tera Operations Per Second).
This report covers processors and the surrounding ecosystem for artificial intelligence and machine learning at the edge, focusing on embedded systems ranging from TinyML to those capable of hundreds of TOPS.
The report delves into the evolution of edge AI from a niche technology to a mainstream powerhouse catalyzing change across autonomous vehicles, IoT, healthcare, and more. From real-time decision-making in autonomous vehicles to immediate patient monitoring in healthcare, edge AI is setting new standards for safety, efficiency, and performance.
Ceva’s NeuPro-Nano licensable neural processing unit (NPU) targets processors that run TinyML workloads, offering up to 200 GOPS for power-constrained edge IoT devices.
A self-contained Edge NPU is a neural processing unit that performs AI inference independently, without relying on a host CPU. This architecture reduces system complexity, minimizes power consumption, and enables always-on embedded AI applications such as audio, vision, and sensor processing, making it ideal for resource-constrained IoT devices.
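For a concrete picture of what always-on, on-device inference looks like, the sketch below uses TensorFlow Lite for Microcontrollers, a widely used open-source TinyML runtime. It is a generic illustration, not Ceva's SDK or NPU driver interface; the model array, operator list, and arena size are hypothetical placeholders, and exact TFLM constructor signatures vary between releases.

```cpp
// Minimal always-on TinyML inference sketch (generic TFLM flow, not Ceva-specific).
#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Quantized model compiled into the firmware image (placeholder symbol).
extern const unsigned char g_model_data[];

// Scratch memory for tensors; the size is workload-dependent (example value).
constexpr int kTensorArenaSize = 20 * 1024;
alignas(16) static uint8_t tensor_arena[kTensorArenaSize];

// Runs one inference on a buffer of quantized sensor/audio samples and
// returns the index of the highest-scoring class, or -1 on error.
int run_inference(const int8_t* samples, int num_samples) {
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // Register only the operators the model actually uses to keep code size small.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddConv2D();
  resolver.AddFullyConnected();
  resolver.AddReshape();
  resolver.AddSoftmax();

  tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                       kTensorArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

  // Copy the samples into the quantized input tensor.
  TfLiteTensor* input = interpreter.input(0);
  for (int i = 0; i < num_samples; ++i) input->data.int8[i] = samples[i];

  if (interpreter.Invoke() != kTfLiteOk) return -1;

  // Pick the highest-scoring class (e.g. a detected keyword or gesture).
  TfLiteTensor* output = interpreter.output(0);
  const int num_classes = output->dims->data[output->dims->size - 1];
  int best = 0;
  for (int i = 1; i < num_classes; ++i) {
    if (output->data.int8[i] > output->data.int8[best]) best = i;
  }
  return best;
}
```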
Ceva’s Edge AI NPUs support a wide compute range, from the ultra-low-power NeuPro-Nano for TinyML workloads to the high-performance, high-throughput NeuPro-M capable of running generative AI models. This scalability allows OEMs and engineers to tailor performance and power efficiency based on specific device needs and use cases.
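As a rough illustration of how such a compute budget can be sized (rule-of-thumb arithmetic with hypothetical figures, not Ceva specifications): each multiply-accumulate counts as two operations, so required throughput is roughly 2 × MACs per inference × inferences per second.

```cpp
#include <cstdio>

int main() {
  // Rule of thumb (assumption, not a Ceva specification):
  // ops/s ~= 2 * MACs per inference * inferences per second.
  const double macs_per_inference = 500e6;    // hypothetical 500 MMAC vision model
  const double inferences_per_second = 30.0;  // e.g. a 30 fps camera stream
  const double gops = 2.0 * macs_per_inference * inferences_per_second / 1e9;
  std::printf("Required throughput: ~%.0f GOPS\n", gops);  // ~30 GOPS
  return 0;
}
```

By this arithmetic, sparse always-on sensing workloads sit at the low end of the range, while generative models served at interactive rates push toward the hundreds-of-TOPS end.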
Industries such as automotive, infrastructure, consumer IoT, and industrial IoT benefit significantly from Ceva’s Edge AI NPUs. Applications range from real-time driver monitoring and predictive maintenance to wearable health tracking and on-device image classification.
By processing AI tasks directly on the device, Ceva’s Edge NPUs eliminate the need to transmit sensitive data to the cloud. This leads to faster response times, improved real-time decision-making, and enhanced data privacy, critical for sectors like automotive and healthcare.
Ceva licenses silicon IP for integration into SoCs along with software toolkits for application development. OEMs and semiconductor companies can accelerate time-to-market with pre-validated, power-efficient NPUs that support industry-standard frameworks and can be deployed across diverse AI workloads.
Get in touch!
Reach out to learn how Ceva can help drive your next Smart Edge design.