Navigation is a critical component of any robotic application. This post dives into two of the most common tools for SLAM navigation: visual SLAM and LiDAR-based SLAM.
Tuesday, June 11, 2019
A critical component of any robotic application is the navigation system, which helps robots sense and map their environment to move around efficiently. This typically, although not always, involves a motion sensor such as an inertial measurement unit (IMU) paired with software to create a map for the robot.
SLAM (simultaneous localization and mapping) systems determine the orientation and position of a robot by creating a map of its environment while simultaneously tracking where the robot is within that environment. The most common SLAM systems rely on optical sensors, the top two being visual SLAM (VSLAM, based on a camera) and LiDAR-based SLAM (Light Detection and Ranging), which uses 2D or 3D LiDAR scanners.
An IMU can be used on its own to guide a robot in a straight line and help it get back on track after encountering obstacles, but integrating an IMU with either visual SLAM or LiDAR creates a more robust solution. So how does each approach differ?
What is Visual SLAM?
The visual SLAM approach uses a camera, often paired with an IMU, to map and plot a navigation path. When an IMU is also used, this is called Visual-Inertial Odometry, or VIO. Odometry refers to the use of motion sensor data to estimate a robot's change in position over time. While SLAM navigation can be performed indoors or outdoors, many of the examples that we'll look at in this post are related to an indoor robotic vacuum cleaner use case.
Typically in a visual SLAM system, set points (points of interest determined by the algorithm) are tracked through successive camera frames to triangulate their 3D positions, a process called feature-point triangulation. This information is relayed back to create a 3D map and identify the location of the robot. An IMU can be added to make feature-point tracking more robust, such as when panning the camera past a blank wall. This is especially important for drones and other flight-based robots, which cannot rely on wheel odometry.
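To make the triangulation step concrete, here is a minimal sketch using NumPy and OpenCV. It assumes the camera intrinsics K and the relative pose (R, t) between the two frames are already known (for example, from VIO); the function name and variables are illustrative, not taken from any particular SLAM library.

```python
# Minimal sketch of feature-point triangulation between two camera frames.
# Assumes matched set points and a known relative pose, e.g. from VIO.
import numpy as np
import cv2

def triangulate_set_points(pts_frame1, pts_frame2, K, R, t):
    """Triangulate matched 2D set points from two frames into 3D points.

    pts_frame1, pts_frame2: Nx2 arrays of matched pixel coordinates.
    K: 3x3 camera intrinsic matrix.
    R, t: rotation and translation of frame 2 relative to frame 1.
    """
    # Projection matrices for each frame: P = K [R | t]
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t.reshape(3, 1)])

    # OpenCV expects 2xN float arrays of image points
    pts1 = np.asarray(pts_frame1, dtype=np.float64).T
    pts2 = np.asarray(pts_frame2, dtype=np.float64).T

    # Result is homogeneous (4xN); divide by w to get 3D coordinates
    pts_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
    return (pts_h[:3] / pts_h[3]).T  # Nx3 array of 3D map points
```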
After mapping and localization via SLAM are complete, the robot can chart a navigation path. Through visual SLAM, a robotic vacuum cleaner would be able to easily and efficiently navigate a room while bypassing chairs or a coffee table, by figuring out its own location as well as the location of surrounding objects.
A potential source of error in visual SLAM is reprojection error: the difference between where a set point is predicted to appear in the image, based on the estimated 3D map and camera pose, and where it is actually observed. Camera optical calibration is essential to minimize geometric distortions (and reprojection error), which can otherwise reduce the accuracy of the inputs to the SLAM algorithm.
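As a rough illustration, the reprojection error for a single set point can be computed by projecting the estimated 3D point back through the camera model and measuring the pixel distance to the observed feature. The simple pinhole-model helper below is a sketch under that assumption, not any specific SLAM library's API.

```python
# Sketch: reprojection error of one set point under a pinhole camera model.
# This is the quantity that SLAM back-ends try to minimize across all points.
import numpy as np

def reprojection_error(point_3d, observed_px, K, R, t):
    """Pixel distance between a projected 3D point and its observation."""
    p_cam = R @ point_3d + t          # transform into the camera frame
    p_img = K @ p_cam                 # apply the intrinsic matrix
    projected_px = p_img[:2] / p_img[2]  # perspective divide to pixels
    return np.linalg.norm(projected_px - observed_px)
```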
What is LiDAR?
A LiDAR-based SLAM system uses a laser sensor, paired with an IMU, to map a room similarly to visual SLAM, but with higher accuracy in one dimension. LiDAR measures the distance to an object (for example, a wall or a chair leg) by illuminating it with multiple transceivers. Each transceiver rapidly emits pulsed light and measures the reflected pulses to determine position and distance.
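The underlying time-of-flight arithmetic is simple, and it shows why the timing electronics must be so precise. The numbers below are illustrative rather than taken from any specific LiDAR datasheet.

```python
# Back-of-the-envelope time-of-flight range calculation: the pulse travels
# to the target and back, so range is half the round trip at light speed.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def lidar_range(round_trip_seconds: float) -> float:
    """Distance to the target in meters from a measured round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A chair leg about 2 m away returns the pulse in roughly 13.3 nanoseconds,
# so timing must be resolved at the nanosecond scale or better:
print(lidar_range(13.3e-9))  # ~1.99 m
```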
Because of how quickly light travels, very precise laser performance is needed to accurately track the exact distance from the robot to each target. This requirement for precision makes LiDAR both a fast and accurate approach. However, that's only true for what it can see: one of the main downsides of 2D LiDAR (commonly used in robotics applications) is that if one object is occluded by another at the height of the scan plane, or an object has an inconsistent shape whose width varies along its body, that information is lost.
Selecting the Right Navigation Method
When deciding which navigation system to use in your application, it's important to keep in mind the common challenges of robotics. Robots need to navigate different types of surfaces and routes. For example, a robotic cleaner needs to navigate hardwood, tile or rugs and find the best route between rooms. Specific location-based data is often needed, as well as the knowledge of common obstacles within the environment. For example, the robot needs to know if it's approaching a flight of stairs or how far away the coffee table is from the door.
Both visual SLAM and LiDAR can address these challenges, with LiDAR typically being faster and more accurate, but also more costly. Visual SLAM is a more cost-effective approach that can use significantly less expensive equipment (a camera rather than lasers) and has the potential to leverage a full 3D map, but it is not quite as precise as LiDAR, and it is slower. Visual SLAM does have the advantage of seeing more of the scene than LiDAR, since its sensor captures more dimensions of the environment than a single scan plane.
Whether you choose visual SLAM or LiDAR, configure your SLAM system with a reliable IMU and intelligent sensor fusion software for the best performance. Contact us if you need advice on how to approach this type of design, or download our ebook, “Unlocking the Robotic Cleaner of Tomorrow”.