There has never been a better time to supercharge your robots. Meet the RobotCore framework, a new open architecture for hardware acceleration in ROS2 designed to assist you with exactly that. This article describes the capabilities of the RobotCore framework and how it can take your robots to the next level.
The RobotCore framework helps you leverage hardware acceleration and build custom compute architectures, or IP cores, that make robots faster, more deterministic, and/or more power efficient. RobotCore itself is a robot-specific processing unit that helps map Robot Operating System (ROS) computational graphs to its CPUs, GPU, and FPGA efficiently to get the best performance. The framework is intended to be modular and extensible.
Before moving forward, let us be clear about what hardware acceleration is. Hardware acceleration is the process of offloading certain computational tasks to specialised hardware so that those tasks can be performed more efficiently. For example, if you are running simulation software or playing a game, you would want a graphics processing unit (GPU) in your system that can handle the rendering with ease.
That is GPU-enabled hardware acceleration in the context of simulation software and games. In robotics, hardware acceleration can help you create faster and more power-efficient robots, through a variety of accelerator platforms.
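The idea of offloading can be sketched in a few lines of plain Python. The sketch below is purely illustrative (the names render_frame and the thread-pool "accelerator" are stand-ins, not a real GPU API): a heavy task is handed to a separate worker while the main loop carries on, and the result is collected when ready.

```python
# Illustrative sketch of "offloading": hand a heavy task to a separate
# worker (standing in for a hardware accelerator) so that the main loop
# stays free to do other work. All names here are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def render_frame(pixels):
    # Stand-in for a rendering workload a GPU would normally handle.
    return [p * 2 % 256 for p in pixels]

with ThreadPoolExecutor(max_workers=1) as accelerator:
    future = accelerator.submit(render_frame, list(range(8)))  # offload it
    ticks = [t for t in range(3)]  # meanwhile, the main loop keeps ticking
    frame = future.result()        # collect the result when it is ready

print(frame[:4])  # -> [0, 2, 4, 6]
```

A real accelerator would run the kernel on dedicated silicon rather than a thread, but the control flow — submit, continue working, collect — is the same.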
Let us now understand how hardware acceleration is used in robotics. We have all seen the Atlas robot by Boston Dynamics. Atlas uses hardware acceleration in its perception stack to navigate around obstacles. It uses a time-of-flight sensor at high frequency to detect and extract surfaces from the environment, then uses the navigation stack to plan a path and the control stack to move through the environment.
It is important to note that all of this has to happen at the edge; you cannot push these computations to the cloud, because the environment is dynamic and constantly changing, and you cannot afford the latency that cloud computation would introduce.
This is where edge devices and hardware acceleration come in. Edge devices by themselves are not fast enough to deal with this situation, so you put a hardware accelerator into the mix to get optimal performance.
Let us first look at the main compute architectures and how they fit the robotics context, using the analogy of a factory. A CPU can be thought of as a very general workshop: general-purpose tools are available and you can build anything you want with them. The bottleneck is that computation happens sequentially rather than in parallel, so you can build anything, but it takes time.
Then we have GPUs, which are a very streamlined form of workshop: there are many workers available, but each worker can perform only certain tasks with a very limited set of tools. GPUs excel at tasks that involve large amounts of data that can be processed in parallel.
From a software engineering perspective, the CPU deals with data points while the GPU deals with data vectors, which is why GPUs can process many computations in parallel. The catch is that GPUs require a hardware systems expert to develop the architecture and decide what tools are available to each individual worker, which drives up the cost of a GPU. GPUs are also not particularly power efficient.
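The "data points versus data vectors" distinction can be made concrete with a small sketch. Below, the same brightness adjustment is written point-wise (the CPU style) and vector-wise (the GPU style); the names and the workload are illustrative only, and plain Python `map` merely stands in for true SIMD/GPU lanes.

```python
# Hypothetical sketch contrasting point-wise (CPU-style) and
# vector-wise (GPU-style) processing of the same workload.

def adjust_pointwise(pixels, gain):
    # CPU analogy: handle one data point at a time, sequentially.
    out = []
    for p in pixels:
        out.append(min(p * gain, 255))
    return out

def adjust_vectorised(pixels, gain):
    # GPU analogy: apply one operation across the whole data vector.
    # (map is sequential in CPython; real GPU lanes run concurrently.)
    return list(map(lambda p: min(p * gain, 255), pixels))

pixels = [10, 100, 200]
print(adjust_pointwise(pixels, 2))   # -> [20, 200, 255]
print(adjust_vectorised(pixels, 2))  # -> [20, 200, 255]
```

Both produce identical results; the difference on real hardware is that the vectorised form lets thousands of "workers" apply the operation to their own slice of the data simultaneously.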
Field programmable gate arrays (FPGAs) are factories that can be transformed to do a specific task. Through frameworks like OpenCL, you can reconfigure the factory into a bespoke one for the product you want to build. The benefit of FPGAs is that they consume little power while delivering high performance; in the current robotics perception literature, FPGAs outperform GPUs on many tasks.
We also have application-specific integrated circuits (ASICs), which are both powerful and power efficient. The problem is that they require a high level of hardware expertise, and few ready-made ASIC architectures are available. Another obstacle to ASIC adoption in robotics is that subfields such as localisation, perception, and control are all areas of active research, so no final architecture has settled. That is a problem for ASICs, which demand a fixed, fully specified architecture. They are also very expensive to develop, so currently the effort is rarely worth it.
Production-grade multi-platform ROS support with Yocto
Instead of relying on common development-oriented Linux distributions (such as Ubuntu), our contributions to Yocto allow you to build a customised Linux system with ROS for your use case, providing unmatched granularity, performance, and security.
The Robot Operating System (ROS) is the de facto framework for robot application development. It is a toolset to build and manage robots through libraries, robotics debugging and visualisation utilities, orchestration tools, and communication piping. Nearly 55 per cent of all commercial robots shipped in 2024 are expected to have at least one ROS package installed, creating a large installed base of ROS-enabled robots.
ROS is inherently CPU-centric. There is therefore an opportunity to build custom compute architectures for robots and make them faster with hardware acceleration. Most ROS packages are amenable to hardware acceleration and are licensed for commercial use.
RobotCore framework: an open architecture for hardware acceleration in ROS2
RobotCore provides a development, build, and deployment experience for creating robot hardware and hardware acceleration kernels that mirrors the standard, non-accelerated ROS development flow. Passing the standard data flow of a ROS2 application through the RobotCore framework gives you two benefits: acceleration at the node level, and inter-process acceleration between nodes.
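To make node-level acceleration concrete, here is a minimal sketch, with entirely hypothetical names (this is not the actual RobotCore API, and no ROS2 client library is used so the snippet stays self-contained): a node's image callback dispatches its kernel to an accelerator when one is available and falls back to the CPU otherwise, so the rest of the node is unchanged.

```python
# Hypothetical sketch of node-level acceleration. EdgeNode, on_image,
# and fpga_edge_filter are illustrative names, not a real API.

def cpu_edge_filter(image):
    # First-difference "edge" kernel as a stand-in perception workload.
    return [abs(b - a) for a, b in zip(image, image[1:])]

def fpga_edge_filter(image):
    # Placeholder for a call into an accelerated kernel; here it simply
    # mirrors the CPU result so the sketch remains runnable anywhere.
    return cpu_edge_filter(image)

class EdgeNode:
    def __init__(self, accelerator_present=False):
        # Pick the kernel once at start-up; the callback stays identical.
        self.kernel = fpga_edge_filter if accelerator_present else cpu_edge_filter

    def on_image(self, image):
        return self.kernel(image)

node = EdgeNode(accelerator_present=True)
print(node.on_image([0, 3, 1, 4]))  # -> [3, 2, 3]
```

Inter-process acceleration follows the same spirit one level up: instead of speeding up a single callback, the data path between two nodes is moved onto the accelerator.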
The RobotCore framework targets three verticals in the customer market: development packages for robotics, package maintenance, and semiconductor manufacturers. The robotics divisions of semiconductor companies would want to adopt the RobotCore framework because it enjoys broad community acceptance.
Semiconductor companies would also benefit by getting their products to market quicker. FPGAs and GPUs are currently in high demand in the robotics market, and that is where silicon vendors could benefit tremendously from this framework.
One key step taken to win broader acceptance of hardware acceleration in the robotics community has been the creation of a hardware acceleration working group, which includes accelerator vendors such as AMD Xilinx, NVIDIA, and Analog Devices.
ROS2 perception graph
RobotCore perception is an optimised robotic perception stack built with the RobotCore framework that leverages hardware acceleration to speed up your perception computations. API-compatible with the ROS2 perception stack, RobotCore perception brings high performance, real-time operation, and reliability to your robots’ perception.
From RobotCore framework to an open standard
Silicon vendors like AMD, NVIDIA, and Microchip form the bottom layer, as they enable users to use their chips. Above that lies the RobotCore framework, which provides the build tools needed to plug quickly into the ROS layer. On top of this sits the UDP/IP layer, which is essentially the middleware framework for ROS itself. At the top sits the application (see Fig. 2). The RobotCore framework enables the bottom two layers so that it becomes easy for roboticists to develop applications.
Benchmarking performance in ROS2
The purpose of open-sourcing the RobotCore framework is to foster competition in the market itself: silicon vendors will develop chips that ultimately help roboticists build better robots. We are also enabling this through a benchmarking effort, performing tests on FPGAs brought to us by the silicon vendors themselves.
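The essence of such a benchmarking effort is simply measuring the same workload under different back ends and comparing latencies. A minimal sketch, with an illustrative workload and hypothetical names (not the actual benchmarking suite), might look like this:

```python
# Minimal sketch of latency benchmarking: time the same workload over
# many iterations and report the mean per-call latency. The workload
# and function names are illustrative only.
import time

def benchmark(fn, payload, iterations=100):
    start = time.perf_counter()
    for _ in range(iterations):
        fn(payload)
    # Mean seconds per call over all iterations.
    return (time.perf_counter() - start) / iterations

def baseline_kernel(data):
    # Stand-in for a perception kernel; an accelerated variant would be
    # benchmarked with the identical harness for a fair comparison.
    return sum(x * x for x in data)

mean_latency = benchmark(baseline_kernel, list(range(1000)))
print(f"mean latency: {mean_latency * 1e6:.1f} us")
```

Running the identical harness against a CPU kernel and its accelerated counterpart is what makes vendor-to-vendor comparisons meaningful.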
RobotCore framework is a new open architecture for hardware acceleration in ROS2 that promises to revolutionise the world of robotics. This innovative technology allows developers to build high-performance robots with faster and more efficient processing, paving the way for advanced robotics applications. With the RobotCore framework, the future of robotics is looking brighter than ever.
This article is based on a tech talk at India Electronics Week 2022 by Gaurav Vikhe, Chief of Product, Acceleration Robotics. It has been transcribed and curated by Laveesh Kocher, a tech enthusiast at EFY with a knack for open source exploration and research.