Mobile robot positioning technology—laser SLAM


The full English name of SLAM is Simultaneous Localization and Mapping, which means real-time localization and map construction.

SLAM was first proposed by Smith, Self, and Cheeseman in 1988 and thus has more than 30 years of development history.


Compared with popular vocabulary such as deep learning, neural networks, and big data, fewer people have heard of SLAM, partly because fewer institutions in China have been engaged in related research. It was not until around 2015 that SLAM gradually became a popular research direction in robotics and computer vision in China, and it has since begun to emerge in today's more popular fields.

This article is a popular-science introduction for newcomers who have not yet been exposed to SLAM.

01

In recent years, mobile robot technology has developed rapidly around the world. People are committed to applying mobile robots in a variety of scenarios, from indoor and outdoor handling robots to service robots to industrial robots, and the use of mobile robots has achieved huge breakthroughs.


One of the most critical technologies in mobile robot research is real-time positioning and mapping, the so-called SLAM technology. SLAM tries to solve the following problem: as a robot moves through an unknown environment, how can it determine its own trajectory from observations of the environment while simultaneously building a map of that environment?

SLAM technology is precisely the sum of the many technologies involved in achieving this goal. Because of its important theoretical and application value, many scholars consider it the key to building a truly fully autonomous mobile robot.

02

A SLAM system is generally divided into five modules: sensor data, visual odometry, back end, mapping, and loop closure detection.

Sensor data: collects various types of raw data from the actual environment, including laser scan data, video image data, and point cloud data.

Visual odometry: estimates the relative pose of the moving target between different moments, using algorithms such as feature matching and direct registration.

Back end: optimizes away the cumulative error introduced by visual odometry, using algorithms such as filters and graph optimization.

Mapping: builds the 3D map.

Loop closure detection: eliminates spatially accumulated errors by recognizing previously visited scenes.

Its workflow is roughly as follows:

After the sensors read in data, visual odometry estimates the relative motion between two moments (ego-motion); the back end corrects the cumulative error in the odometry estimates; the map is built from the motion trajectory obtained from the front end and the back end; and loop closure detection recognizes images of the same scene taken at different times, providing spatial constraints that eliminate the accumulated error. A minimal toy illustration of this loop follows.
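To make the workflow concrete, here is a small, self-contained sketch in Python. It is an illustrative toy, not a real SLAM system: the world is one-dimensional, the noise level and every name in it are assumptions, and the back end is a plain least-squares solve standing in for graph optimization. It shows odometry drift accumulating and a single loop-closure constraint removing it.

```python
# Toy illustration (assumed example, not a real SLAM system):
# odometry drifts, one loop-closure constraint corrects it.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Ground truth: the robot drives 5 unit steps out and 5 back,
# returning to its start (a loop), in a 1-D world for simplicity.
true_steps = np.array([1.0] * 5 + [-1.0] * 5)
odom = true_steps + rng.normal(0.0, 0.05, true_steps.size)  # noisy odometry

# Front end / dead reckoning: integrating odometry accumulates drift.
dead_reckoned = np.concatenate([[0.0], np.cumsum(odom)])
print("end position before optimization:", dead_reckoned[-1])  # not 0.0

# Back end: find poses x_0..x_10 that best satisfy every odometry
# constraint (x_{i+1} - x_i = odom_i) plus one loop-closure constraint
# (x_10 = x_0, i.e. the starting place was recognized again).
def residuals(x):
    odo_res = (x[1:] - x[:-1]) - odom      # odometry edges
    loop_res = np.array([x[-1] - x[0]])    # loop-closure edge
    anchor = np.array([x[0]])              # pin the first pose at 0
    return np.concatenate([odo_res, loop_res, anchor])

sol = least_squares(residuals, dead_reckoned)
print("end position after optimization:", sol.x[-1])  # ~0.0
```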

At present, SLAM technology is mainly applied in fields such as drones, autonomous driving, robotics, AR, and smart homes.

03

In terms of core functional modules, the common mobile robot SLAM systems today generally come in two forms: SLAM based on lidar (laser SLAM) and SLAM based on vision (Visual SLAM or VSLAM).

Laser SLAM grew out of early ranging-based positioning methods (such as ultrasonic and infrared single-point ranging). The emergence and popularization of lidar (Light Detection And Ranging) made measurements faster, more accurate, and richer in information. The object information collected by a lidar presents as a series of scattered points carrying accurate angle and distance information, known as a point cloud. A laser SLAM system generally computes the change in the lidar's relative distance and attitude by matching and comparing two point clouds from different moments, thereby completing the localization of the robot itself.
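This point cloud matching step is commonly implemented with variants of the Iterative Closest Point (ICP) algorithm. Below is a minimal point-to-point ICP sketch in Python; the synthetic "wall corner" scene, the motion, and the fixed iteration count are assumptions for illustration, and production laser SLAM uses more robust variants (outlier rejection, point-to-line metrics, and so on).

```python
# Minimal point-to-point ICP sketch (illustrative assumptions only).
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Align src to dst; return the accumulated rotation and translation."""
    R_tot, t_tot = np.eye(2), np.zeros(2)
    tree = cKDTree(dst)
    for _ in range(iters):
        _, idx = tree.query(src)                    # nearest-neighbor pairing
        R, t = best_rigid_transform(src, dst[idx])
        src = src @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot

# Two scans of the same "wall corner": the second is seen after the
# lidar rotates 10 degrees and moves by (0.5, 0.2).
corner = np.vstack([np.column_stack([np.linspace(0, 2, 50), np.zeros(50)]),
                    np.column_stack([np.zeros(50), np.linspace(0, 2, 50)])])
a = np.deg2rad(10.0)
R_true = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
scan2 = corner @ R_true.T + np.array([0.5, 0.2])

R_est, t_est = icp(corner, scan2)
print("rotation (deg):", np.rad2deg(np.arctan2(R_est[1, 0], R_est[0, 0])))
print("translation:", t_est)
```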

Lidar ranging is quite accurate, its error model is simple, it runs stably in any environment not exposed to direct strong light, and point clouds are relatively easy to process. At the same time, point clouds themselves contain direct geometric relationships, which makes the robot's path planning and navigation intuitive. Laser SLAM theory is also relatively mature, and products based on it are more plentiful on the market.


Visual SLAM is mainly implemented with cameras. Cameras come in many varieties, chiefly monocular, binocular, monocular structured light, binocular structured light, and ToF. V-SLAM based on depth cameras can, like laser SLAM, compute the distance to obstacles directly from the collected point cloud data; V-SLAM schemes based on monocular or fisheye cameras instead use multiple image frames to estimate their own pose change, then compute the distance to objects by accumulating those pose changes, performing localization and map construction along the way.
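The "accumulating pose changes" step can be pictured as chaining homogeneous transformation matrices: each frame-to-frame estimate is a small rigid transform, and their running product is the camera's absolute pose. Here is a tiny planar (SE(2)) sketch in Python; the motions are invented, and note that real monocular estimates are only known up to an unknown scale factor, which this toy ignores.

```python
# Chaining frame-to-frame pose estimates (assumed planar toy example).
import numpy as np

def se2(theta, x, y):
    """Homogeneous 2-D rigid transform: translate (x, y), rotate theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Relative motions reported by the front end: each moves the camera
# 1 m forward in its current frame; the last two also turn 90 degrees.
relative = [se2(0.0, 1.0, 0.0),
            se2(np.pi / 2, 1.0, 0.0),
            se2(np.pi / 2, 1.0, 0.0)]

pose = np.eye(3)                 # absolute pose of the first frame
for T in relative:
    pose = pose @ T              # compose: world <- frame_k <- frame_k+1
    print("position:", np.round(pose[:2, 2], 3))
# Prints (1, 0), (2, 0), (2, 1): distances to scene objects can now be
# expressed in one common world frame.
```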

04

Comparison of laser SLAM and visual SLAM

The industry has long held its own views on whether laser SLAM or visual SLAM is better and which will be the mainstream trend in the future. Below is a simple comparison from the following perspectives:

Technological development

As early as 2005, laser SLAM had already been studied thoroughly and its framework had been initially settled. Laser SLAM is currently the most stable and most mainstream positioning and navigation method, while visual SLAM is still at the stage of further research, expanding application scenarios, and gradual productization.

Use environment

Laser SLAM is mainly used indoors; visual SLAM can work both indoors and outdoors, but it depends heavily on light and cannot work in dark places or in texture-less areas.

Map accuracy

Of the two, the map constructed by laser SLAM has higher accuracy without cumulative error, and it can be used directly for positioning and navigation.

By comparison, we find that laser SLAM and visual SLAM each have their own strengths.

Of course, laser SLAM also has limitations. For example, in a long straight corridor with featureless walls on both sides, or in a dynamically changing environment, relying on laser SLAM alone easily leads to lost positioning.

05

Navigation based on laser SLAM can integrate laser reflector navigation, QR code navigation, inertial navigation, cameras, and more, adopting a multi-sensor fusion algorithm that makes the mobile robot's positioning more accurate and more robust and gives it wider environmental applicability, so that it can handle long corridors, highly dynamic and complex environments, and potholed ground. (A toy sketch of the fusion idea follows the next paragraph.)

In long corridors and highly dynamic environments, the robot can switch freely to laser reflector navigation or QR code navigation to ensure positioning is not lost. Where ground conditions are relatively poor, a 3D camera can be chosen to recognize and detect ground potholes and three-dimensional obstacles, and the mobile robot then stops or detours according to its parameter configuration.
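As a toy illustration of the fusion idea mentioned above (an assumed textbook example, not Xianzhi's actual algorithm), a classical way to combine position estimates from two sensors is inverse-variance weighting, which trusts the lower-variance sensor more:

```python
# Inverse-variance fusion of two independent estimates (assumed example).
import numpy as np

def fuse(estimates, variances):
    """Combine independent 1-D estimates, weighting each by 1/variance."""
    w = 1.0 / np.asarray(variances)
    fused = np.sum(w * np.asarray(estimates)) / np.sum(w)
    return fused, 1.0 / np.sum(w)   # fused value and its variance

# Hypothetical readings: laser SLAM says x = 10.02 m (variance 0.04);
# reflector navigation says x = 10.10 m (variance 0.01).
x, var = fuse([10.02, 10.10], [0.04, 0.01])
print(f"fused position: {x:.3f} m, variance: {var:.4f}")  # ~10.084 m
```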


In addition, to ensure the safe use of mobile robots, Xianzhi Robot also has a series of measures to keep robots, people, and goods safe during operation. For example, the dual-laser solution completes 360° all-around safety detection about the mobile robot with two lidars; in the SRC-based laser SLAM automatic forklift solution, safety protection is achieved through 3D cameras, infrared sensors, ultrasonic sensors, safety touch edges, and the like, ensuring the safety of people and goods while the automatic forklift operates.


In fact, for mobile robots to cope with all kinds of complex usage scenarios, laser SLAM and visual SLAM will inevitably develop through both competition and fusion, and multi-sensor fusion navigation is bound to be the direction of future development. As the core technologies of mobile robots are solved, mobile robots will take over simple, repetitive, and strenuous tasks from human workers and truly serve humanity.
