The appeal of putting AI in embedded applications is obvious; think of using face-id to authorize access to machine controls on a factory floor. Facial recognition, human presence detection, voice control, anomaly detection: AI opens up so many possibilities. I’ll use face-id as the example in this blog. It’s so much easier to use, more intelligent and more robust than traditional human-machine interfaces and passwords. Not to mention that everyone else is doing it. How AI works may seem magical, but what it can do is fast becoming a minimum expectation. No one wants their product judged as being built on yesterday’s technology.
There’s a problem for a product builder: AI-based development is quite different from standard embedded development. You aren’t writing software, at least not for the core function. You have to train a neural net to recognize patterns (like images), much as you would train a child in school. Then you must fit that net into the constrained footprint of your embedded device to meet size and power goals. Neural nets may not be conventional code, but the net and its calculations still consume memory and burn power, and as an embedded developer you know how important it is to squeeze these metrics as much as possible. I’ll get to this in my next blog. For now, let’s understand at least some of how these neural nets work.
I don’t want to walk you through a lengthy explanation of neural nets; just what you’re going to have to do to make your application work. A neural net is conceptually a series of layers of “neurons”. Each neuron reads one or more inputs from the previous layer (or from the input data), applies a calculation using trained weights, and feeds the result forward. Based on these weights, each layer detects features, progressively more complex as you move through the layers, until the net recognizes a complex image at the output.
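To make this concrete, here is a minimal sketch of that feed-forward calculation in plain Python. The weights and inputs are made-up numbers purely for illustration, and real nets use optimized matrix math rather than loops, but the arithmetic per neuron is exactly this: a weighted sum plus a bias, passed through an activation function.

```python
def neuron(inputs, weights, bias):
    """One neuron: weighted sum of its inputs plus a bias, passed
    through a ReLU activation (negative results are clipped to zero)."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return max(0.0, total)

def layer(inputs, weight_rows, biases):
    """A layer is just many neurons reading the same inputs, each
    with its own trained weights and bias."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Two input values feeding a layer of three neurons
# (weights are illustrative, not trained).
out = layer([0.5, -1.0],
            [[0.2, 0.8], [1.0, -0.5], [-0.3, 0.1]],
            [0.0, 0.1, 0.05])
print(out)  # each value is one neuron's output, fed to the next layer
```

In a full network the output of this layer becomes the input to the next one, and the final layer's outputs are interpreted as the recognition result.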
The first clever part, then, is in designing the net – how many layers, how the layers connect, and so on – the core neural-net architecture. The second clever part is training. This is a process in which many images are run through the net, each labeled to identify what should be recognized. These runs iteratively adjust the weight values until the net reliably recognizes what the labels say it should.
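A toy version of that training loop, for a single neuron and a handful of made-up labeled points, looks like this. Real face-id training does the same thing (gradient descent nudging weights toward the labels) across millions of weights and thousands of images; the data and learning rate below are illustrative only.

```python
import math

def predict(w, b, x):
    # sigmoid squashes the weighted sum into a 0..1 "confidence"
    return 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

# labeled examples: (features, label) -- label 1 = "approved", 0 = "not"
data = [([1.0, 1.0], 1), ([0.9, 1.1], 1),
        ([-1.0, -1.0], 0), ([-1.1, -0.8], 0)]

w, b, lr = [0.0, 0.0], 0.0, 0.5
for epoch in range(200):
    for x, y in data:
        p = predict(w, b, x)
        err = p - y                 # how wrong the prediction is
        grad = err * p * (1 - p)    # gradient through the sigmoid
        w[0] -= lr * grad * x[0]    # nudge each weight to reduce error
        w[1] -= lr * grad * x[1]
        b    -= lr * grad

print([round(predict(w, b, x), 2) for x, _ in data])
```

After 200 passes over the data, the predictions land close to the labels: high for the "approved" examples, low for the others. That accumulation of weight adjustments is what "building up the weight values" means in practice.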
If you feel ambitious, you might build your own neural net from scratch using one of the standard frameworks such as TensorFlow. You could also start from an open-source option such as this one for face-id. You can build all of this into an app which can run on a laptop, which will be handy for customers who want to register new approved faces. Now you can start training your network with a test set of approved faces in multiple poses.
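Many open-source face-id nets work by mapping each face image to an embedding vector, so "registering an approved face" reduces to storing that vector, and authorizing reduces to comparing vectors. Assuming your trained net gives you such an embedding (the `FaceRegistry` class and the numbers below are hypothetical), the registration/authorization logic of the laptop app could be sketched as:

```python
import math

def cosine_similarity(a, b):
    """How alike two embedding vectors are (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class FaceRegistry:
    """Stores embeddings of approved faces, entirely on-device."""
    def __init__(self, threshold=0.8):
        self.approved = {}          # name -> embedding vector
        self.threshold = threshold

    def register(self, name, embedding):
        self.approved[name] = embedding

    def authorize(self, embedding):
        # return the matching name, or None if no stored face is close enough
        for name, ref in self.approved.items():
            if cosine_similarity(ref, embedding) >= self.threshold:
                return name
        return None

reg = FaceRegistry()
reg.register("alice", [0.9, 0.1, 0.4])      # embedding from the trained net
print(reg.authorize([0.88, 0.12, 0.41]))    # near alice's embedding -> match
print(reg.authorize([-0.5, 0.9, -0.2]))     # dissimilar embedding -> None
```

Note that nothing here needs a network connection: the embeddings stay on the device, which matters for the privacy argument below.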
Why not just do this in the cloud?
There are services that will do face recognition online – no need to get into messy AI on your device. Just take the picture, upload it to the cloud, the app transmits back an OK and your product approves the next step.
But – all your approved employees need to have their photos and other credentials in the cloud. Maybe not such a great idea for security and privacy. You’ll also burn quite a bit of power communicating the image to the cloud every time a worker wants access to a machine. And if your Internet connection is down, no one can be approved until it comes back up. Doing authentication right on the device preserves privacy and security, keeps the power demand low, and continues to work even when the network connection is down.
Up next – embedding your trained network
Now that the hard AI part is done, you have to get the trained network onto your device. That’s an interesting step in its own right, where you’ll definitely need help from your AI platform. I’ll talk about that more in my next blog. Meanwhile, for more information, check out “Deep learning for the real-time embedded world.”
Published on embedded.com.