Machine learning (ML) had an unprecedented impact in 2020, particularly in detecting and tracking COVID-19 through large-scale data processing. Researchers used ML to analyze massive amounts of information and draw conclusions about the general health of large populations.
As for 2021, there is great potential for ML applications in quantum computing, robotics, and edge-based AI, to name a few.
A high-level flowchart of machine learning AI process steps. Image used courtesy of sustAGE
At the core of these applications is hardware. In particular, three hardware-focused ideas are essential to ML hardware development: designing toward the edge, low-power architecture, and compatibility with ML frameworks.
Pushing ML to the Edge: On-Device AI
Intelligence at the edge is becoming more necessary when considering the massive amounts of data being used and processed. When designing for edge AI, designers must consider many constraints like power, board space, and computation time.
On-device AI solves some of these issues by allowing localized processing, which reduces the strain on cloud computing while also being faster and more power-efficient. Many manufacturers recognize this benefit and are working to include on-device AI in applications like smartphones, vehicles, and IoT devices. By designing with the edge in mind, engineers can give products a competitive advantage when they reach the market.
A recent development in hardware for AI at the edge is LG's LG8111, an SoC offered with a development board. The SoC includes an LG-specific AI processor and an AI accelerator. Together, these support various AI processing functions like voice, video, image, and control intelligence.
LG8111 SoC and development board. Image used courtesy of LG
The chip also supports AWS IoT Greengrass, allowing the SoC and development board to host a variety of applications and solutions, depending on the device.
Low-Power Architecture With DSP and NN Processor
Power is one of the most important considerations when designing at the edge. Machine learning deals with massive amounts of data; thus, eliminating power waste while processing is necessary when designing a system.
One way to achieve a low-power architecture is by combining a low-power digital signal processor (DSP) with a dedicated NN (neural network) processor. DSP Group put this low-power scheme into action with its new DBM10, which pairs a DSP with the nNetLite NN processor. This structure allows power to be balanced between the two processors, depending on the algorithms and frameworks installed.
This setup also allows the processors to divide data handling and specialized tasks between them, which can consume less power than loading all of the tasks onto a single processor.
Supported applications on the DBM10 and a look at the actual SoC. Image used courtesy of DSPG
This combination of processors allows the SoC to support ultra-low-power inference at ~500 μW, a level typical of most voice NN algorithms.
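To put a figure like 500 μW in perspective, a quick back-of-the-envelope energy budget shows why such numbers matter at the edge. The per-inference latency and battery capacity below are illustrative assumptions, not DBM10 specifications:

```python
# Rough energy budget for always-on voice inference.
# The ~500 uW figure comes from the article; latency and battery
# capacity are hypothetical values chosen for illustration.

POWER_W = 500e-6     # ~500 uW average inference power
LATENCY_S = 0.010    # assumed 10 ms per inference
BATTERY_WH = 1.0     # assumed small 1 Wh cell

energy_per_inference_j = POWER_W * LATENCY_S   # joules per inference
battery_j = BATTERY_WH * 3600                  # convert Wh to joules

# Hours of continuous, always-on inference on one charge:
runtime_hours = battery_j / POWER_W / 3600

print(f"{energy_per_inference_j * 1e6:.1f} uJ per inference")  # 5.0 uJ
print(f"{runtime_hours:.0f} h of always-on inference")         # 2000 h
```

At these assumed numbers, a single small cell could sustain always-on listening for months, which is exactly the regime battery-powered edge devices target.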
Compatibility with ML Frameworks
Though programming and software applications may seem separate from hardware design, the line between them is increasingly blurred, especially in ML. Because of this, designers need to know which frameworks a device will run. Depending on the needs of the product or the user, a processor compatible with multiple ML frameworks can be a real advantage.
Ambarella’s CV5 processor is a recent example of framework compatibility. The CV5 is compatible with common ML frameworks like Caffe, PyTorch, TensorFlow, and ONNX. This flexibility in framework compatibility gives the user multiple options to integrate their neural networks into the device.
ML in 2021: Quantum Machine Learning?
One major trend predicted in 2021 is the integration of machine learning with quantum computing, dubbed “quantum machine learning.” According to the Quantum Daily, quantum machine learning refers to “a field that aims to write quantum algorithms to perform machine learning tasks.”
Some machine learning algorithms are too complex and labor-intensive for classical computers to process. Using quantum ML, researchers can translate classic ML algorithms into a quantum circuit, allowing them to run effectively on a quantum computer.
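To make the circuit idea concrete, here is a minimal, purely illustrative sketch of a single-qubit "classifier": a feature is encoded as a qubit rotation, a trainable rotation plays the role of a model weight, and the Z-expectation value of the final state is the prediction. Everything is simulated with plain NumPy; real quantum ML would run on quantum hardware or a dedicated simulator.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate as a 2x2 unitary matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def predict(x, weight):
    """Encode feature x as RY(x), apply a trainable RY(weight),
    and return the Z-expectation value of the resulting state."""
    state = ry(weight) @ ry(x) @ np.array([1.0, 0.0])  # start in |0>
    p0, p1 = np.abs(state) ** 2                        # measurement probs
    return p0 - p1                                     # <Z> in [-1, 1]

# With no rotation, the qubit stays in |0>, so <Z> = +1.
print(predict(0.0, 0.0))    # -> 1.0
# A pi rotation flips the qubit to |1>, so <Z> = -1.
print(predict(np.pi, 0.0))  # -> -1.0
```

In a variational QML algorithm, `weight` would be tuned by a classical optimizer so that the sign of the expectation value matches the training labels, mirroring how weights are fit in a classical model.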
Classical machine learning (CML) vs. quantum machine learning (QML). Image used courtesy of ICFO
This new field could pave the way for the commercialization of quantum computing while enhancing the benefits of machine learning we saw this last year.
With the pandemic still ongoing, fast, accurate data processing is imperative. By expanding and evolving ML through board-level design choices, designers can push ML to the edge and address the ever-increasing burden of data processing.