The latest Google I/O conference highlighted that artificial intelligence (AI) is one of Google’s main focuses. And not just in research-oriented, futuristic projects like delivery drones: Google has made it quite clear that AI is coming into consumer homes and handheld devices now, in 2016, as my colleague expressed in this recent post. Google certainly isn’t the only company trying to harness the power of AI. Microsoft, Facebook, Amazon, Baidu, and others are all competing to create the framework that will make this possible.
Neural networks – the brain in the machine
Neural networks are the mechanism by which the human brain learns. These networks develop over time as sensory data, like sounds and images, are collected from experience. The brain stores this sensory data in an intricate network of neurons that it can later use to perform many different tasks effectively: identifying objects, speaking and understanding spoken language, and solving complex problems, to name a few. The brain’s amazing capacity to perform these tasks efficiently and with a low margin of error has drawn machine learning experts to mimic its capabilities by artificial means.
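The artificial version of this idea is surprisingly compact. As a rough sketch (a single artificial neuron, not any particular framework’s implementation), the code below adjusts its connection weights from repeated experience until it reliably computes a logical AND of its inputs:

```python
import math

def sigmoid(x):
    # Squashes any input into the range (0, 1), like a neuron's firing rate.
    return 1.0 / (1.0 + math.exp(-x))

# A single artificial neuron learning the AND function by gradient descent.
weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.5

# Training "experience": input pairs and the desired output.
training_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

for _ in range(5000):
    for inputs, target in training_data:
        # Forward pass: weighted sum of inputs, then squash.
        z = sum(w * i for w, i in zip(weights, inputs)) + bias
        output = sigmoid(z)
        # Backward pass: nudge each weight to reduce the error.
        grad = (output - target) * output * (1 - output)
        weights = [w - learning_rate * grad * i for w, i in zip(weights, inputs)]
        bias -= learning_rate * grad

def predict(inputs):
    return round(sigmoid(sum(w * i for w, i in zip(weights, inputs)) + bias))

for inputs, target in training_data:
    print(inputs, "->", predict(inputs))
```

Deep neural networks chain thousands of such units into many layers, which is what makes them powerful — and what makes training them computationally demanding.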
Until recently, it was quite clear that humans held a vast advantage over machines in those areas. Now, constant advances are being made at a dizzying pace, calling that assumption into question. For example, a few months ago, a computer program defeated one of the top-ranking players in the world at the extremely complex game of Go, a level of AI that experts had estimated to be at least a decade away. The feat was achieved with deep neural networks, by DeepMind, a company owned by Google. Today, neural networks are used for an enormous array of applications, from autonomous vehicles to facial recognition. Every big company in the industry has a large team of developers working on machine learning, and each one has its own framework to support the software development.
Open source deep learning is where the action is
One of the most effective tools for advancing these machine learning software modules is making them open-source. Caffe, developed by the Berkeley Vision and Learning Center, was one of the pioneering open-source deep learning frameworks. The thriving community of developers that use and enhance Caffe apparently paved the way for all the commercial tech giants to want to join in.
In the last year or so, many of the big companies with AI technology have released their modules as open-source code. In September last year, Google announced that it would open its TensorFlow software library. Early this year, Microsoft shared its Computational Network Toolkit (CNTK) on GitHub, and Facebook AI Research (FAIR) shared their deep learning modules on Torch. The latest of this series of releases is Amazon’s framework, Deep Scalable Sparse Tensor Network Engine (DSSTNE, pronounced destiny), which they just opened on GitHub in May this year.
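Different as their APIs are, all of these frameworks share one core abstraction: a computation is built as a graph of operations, so that gradients can flow backward through it automatically during training. The toy example below is not any framework’s actual API — just a minimal illustration of that dataflow-graph idea:

```python
# Toy computational graph, the abstraction shared by TensorFlow, CNTK,
# Torch, and DSSTNE. Each node remembers its inputs and the local
# derivative with respect to each, so gradients can flow backward.
class Node:
    def __init__(self, value, parents=(), local_grads=()):
        self.value = value
        self.parents = parents          # upstream nodes in the graph
        self.local_grads = local_grads  # d(self)/d(parent) for each parent
        self.grad = 0.0

def add(a, b):
    return Node(a.value + b.value, (a, b), (1.0, 1.0))

def mul(a, b):
    return Node(a.value * b.value, (a, b), (b.value, a.value))

def backward(output):
    # Reverse-mode differentiation: push gradients from output to inputs.
    output.grad = 1.0
    stack = [output]
    while stack:
        node = stack.pop()
        for parent, g in zip(node.parents, node.local_grads):
            parent.grad += node.grad * g
            stack.append(parent)

# Build y = x * w + b, then ask for gradients of y.
x, w, b = Node(3.0), Node(2.0), Node(1.0)
y = add(mul(x, w), b)
backward(y)
print(y.value, w.grad)  # value of y, and dy/dw
```

In a real framework the nodes are tensors and the operations run on optimized kernels, but the graph-plus-backward structure is the same, and it is exactly what a training loop exercises millions of times.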
Until very recently, all these frameworks were highly guarded secrets. Now, in an attempt to attract developer communities and build ecosystems, each company has unleashed its software, and with it the potential to create countless AI applications.
Bringing neural networks to embedded platforms
As AI advances into multiple aspects of our lives, one of the biggest challenges is bringing this intelligence to small, low-power devices. This requires embedded platforms that can deliver extremely high performance at very low power consumption. But that’s still not enough: the embedded platform must also support one or more of the above frameworks, in order to exploit the strength of all the open-source modules.
One way to make this happen is with dedicated hardware. A hardware module can be set up with an efficient implementation of the latest libraries, and the silicon can be fabricated and shipped. The problem with this approach is that it misses what all the top companies have understood and put into action: flexibility. AI is still a nascent field of technology. It is constantly evolving and improving, and its full potential is still far from being realized. Every day, swarms of talented scientists and developers are making advances and creating tools that expand what AI can do. That’s why the only way to really harness the power of these open-source frameworks is to use a flexible, programmable software solution.
Want to learn more?
- Click here to find out about CDNN, CEVA Deep Neural Networks – the advanced, low-power, embedded solution for machine learning
- Check out the CEVA-XM4 intelligent vision processor