From an electrical engineering perspective, robotics is a challenging field because it requires extremely low-latency computation. In most cases, roboticists want their robots to respond in real time to the stimuli presented to them.
However, a robotic system’s decision-making process generally looks something like this: first, it must assess its environment via sensors and cameras; second, it must map the environment and localize itself; and finally, it must decide on a course of action. Only after all of these steps are completed does the system actually follow through on an action.
Robots must perform simultaneous localization and mapping (SLAM). Image used courtesy of SIFSOF
The problem here is that the robotic system is dealing with immense amounts of data capture, processing, and analysis—before it can even move. This is not conducive to the real-time decision-making roboticists desire.
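The pipeline described above can be sketched as a simple control loop. This is an illustrative toy, not any specific robotics framework: the `sense`, `localize_and_map`, and `plan` functions are hypothetical stubs standing in for real perception, SLAM, and planning stages.

```python
def sense():
    """Stand-in for perception: read range/camera data (hypothetical stub)."""
    return {"scan": [1.2, 0.8, 2.5]}

def localize_and_map(observation, state):
    """Stand-in for a SLAM update: fuse the observation into the map
    and re-estimate the robot's pose (hypothetical stub)."""
    state["map"].append(observation["scan"])
    state["pose"] = len(state["map"])  # placeholder for a real pose estimate
    return state

def plan(state):
    """Stand-in planner: choose an action from the current world model."""
    return "move_forward" if state["pose"] % 2 else "turn_left"

state = {"map": [], "pose": 0}
for _ in range(3):
    obs = sense()                         # 1. assess the environment
    state = localize_and_map(obs, state)  # 2. map and localize (SLAM)
    action = plan(state)                  # 3. decide on a course of action
    # 4. only after all of the above does the robot actually act
```

The point of the sketch is the sequencing: every action waits on the full sense-map-plan chain, so any latency in those stages delays the robot's response.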
While some developers have turned to software optimization to address this issue, one group of researchers at MIT has come up with a hardware-based solution.
A Brief History of Hardware Acceleration
To understand how MIT researchers thought outside of the box for real-time robotics motion, it may first be useful to go back to NVIDIA’s popularization of the GPU in 1999.
The idea was simple: the standard CPU was a solid multipurpose device, decent at all tasks and excellent at none—a hardware jack of all trades. But, when graphics started to become an industry standard, and computing devices needed to process millions of pixels simultaneously, something had to change.
So, the GPU was invented—a device meant explicitly for parallel processing. It wouldn’t provide lower latency for individual tasks than a CPU, but its throughput would blow the CPU out of the water. This made the device useful for graphics processing in a way that a CPU could never achieve.
NVIDIA GeForce 256, the company’s first GPU. Image used courtesy of Wikimedia Commons
Robomorphic Computing: Hardware Acceleration for Robots
Following this same school of thought, researchers at MIT decided to speed up robotic computation by equipping individual robots with customized hardware acceleration.
The thought process was the same: if all robots have different environments, different functional capabilities, and different tasks, why should they all utilize the same broad processing units? The “jack of all trades” approach was simply not the best option.
This idea of introducing specialized hardware for an individual robot was dubbed “robomorphic computing” by the researchers.
How Robomorphic Computing Works
The researchers achieved robomorphic computing by creating a software system that generates customized hardware from a robot's unique features. The user enters information about the robot into the software, including its limb layout and the degrees of freedom of each joint.
Robomorphic computing flow chart. Image used courtesy of Neuman et al.
The system then organizes these parameters into a sparse matrix, which it uses to determine the best hardware architecture for the robot. According to the university press release, the system “exposes parallelism in algorithm loops iterating over robot limbs and links, and maps it to parallel processing elements in the hardware template.”
In this way, the system designs a hardware architecture specialized to achieve maximum efficiency for the specific robot’s needs.
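As a rough intuition for how morphology can expose parallelism, consider the toy sketch below. It is an illustrative assumption, not the researchers' actual representation: a robot is described as a dict of limbs and joint counts, encoded as a block-diagonal 0/1 matrix, and each independent (zero-separated) block is treated as a candidate for its own parallel processing element.

```python
def morphology_matrix(limbs):
    """Build a block-diagonal 0/1 matrix: one block per limb, sized by its
    joint count. Zeros between blocks mean the limbs don't interact, so
    their computations can proceed independently."""
    n = sum(limbs.values())
    mat = [[0] * n for _ in range(n)]
    offset = 0
    for joint_count in limbs.values():
        for i in range(joint_count):
            for j in range(joint_count):
                mat[offset + i][offset + j] = 1
        offset += joint_count
    return mat

def parallel_units(limbs):
    """Map each independent limb block to its own processing element,
    i.e., unroll the per-limb loop across parallel hardware units."""
    return [(name, dof) for name, dof in limbs.items()]

# Hypothetical quadruped: four limbs, three degrees of freedom each.
quadruped = {"front_left": 3, "front_right": 3, "rear_left": 3, "rear_right": 3}
m = morphology_matrix(quadruped)
units = parallel_units(quadruped)  # four candidate parallel units
```

The zero entries between blocks are what a hardware generator can exploit: structurally guaranteed zeros mean multipliers and adders for those terms can be pruned, and independent blocks can be computed simultaneously.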
The team did not fabricate a custom ASIC for their robots; instead, they implemented the design on an FPGA. Despite operating at a slower clock speed, their FPGA outperformed the CPU by eight times and the GPU by 86 times.
Robots With Specialized ASICs
MIT says the team is the first to bring individualized hardware acceleration to the world of robotics, with their software system and customized FPGA playing a significant role in that achievement. The researchers who coined “robomorphic computing” envision a future in which every robot has its own specialized ASIC.