bg lidar section

Jan Kowalczyk
2025-05-07 10:56:37 +02:00
parent 275a2ebac2
commit caa2edcef5

@@ -421,13 +421,13 @@ Autoencoders have been shown to be useful in the anomaly detection domain by ass
{explain how radar/lidar works, usecases, output = pointclouds, what errors}
{rain degradation paper used deepsad $\rightarrow$ explained in detail in next chapter}
LiDAR (Light Detection and Ranging) measures distance by emitting short laser pulses and timing how long they take to return, a working principle familiar from the more widely known radar technology, which uses radio-frequency pulses and measures their return time to gauge an object's range. Unlike radar, however, LiDAR operates at much shorter wavelengths and can fire millions of pulses per second, achieving millimeter-level precision and producing dense, high-resolution 3D point clouds. This fine granularity makes LiDAR ideal for applications such as detailed obstacle mapping, surface reconstruction, and autonomous navigation in complex environments.
Because the speed of light in air is effectively constant, multiplying half the round-trip time by that speed gives the distance between the LiDAR sensor and the reflecting object, as visualized in figure~\ref{fig:lidar_working_principle}. Modern spinning multi-beam LiDAR systems emit millions of these pulses every second. Each pulse is sent at a known combination of horizontal and vertical angles, creating a regular grid of measurements: for example, 32 vertical channels swept through 360° horizontally at a fixed angular spacing. While newer solid-state designs (flash, MEMS, phased-array) are emerging, spinning multi-beam LiDAR remains the most common type in autonomous vehicles and robotics because of its proven range, reliability, and mature manufacturing base.
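Expressed as a formula, the measured range $d$ follows from the round-trip time $\Delta t$ and the speed of light in air $c$ as
\begin{equation}
	d = \frac{c \, \Delta t}{2},
\end{equation}
where the factor of two accounts for the pulse traveling to the reflecting object and back.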
\fig{lidar_working_principle}{figures/bg_lidar_principle_placeholder.png}{PLACEHOLDER - An illustration of lidar sensors' working principle.}
Each time a LiDAR emits and receives a laser pulse, it can combine the ray's direction with the calculated distance to produce a single three-dimensional point. By collecting millions of such points each second, the sensor constructs a “point cloud”: a dense set of 3D coordinates relative to the LiDAR's own position. In addition to X, Y, and Z, many LiDARs also record the intensity or reflectivity of each return, providing extra information about the surface properties of the object hit by the pulse.
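Concretely, a return measured at azimuth $\theta$, elevation $\phi$, and range $d$ maps to sensor-frame Cartesian coordinates via the usual spherical-to-Cartesian conversion (axis and angle-sign conventions vary between sensor vendors):
\begin{equation}
	x = d \cos\phi \cos\theta, \qquad y = d \cos\phi \sin\theta, \qquad z = d \sin\phi.
\end{equation}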
@@ -435,9 +435,15 @@ Every time a pulse returns, the LiDAR records its direction (based on the angles
LiDAR's high accuracy, long range, and full-circle field of view make it indispensable for tasks like obstacle detection, simultaneous localization and mapping (SLAM), and terrain modeling in autonomous driving and mobile robotics. While complementary sensors such as time-of-flight cameras, ultrasonic sensors, and RGB cameras have their strengths at short range or in particular lighting conditions, only LiDAR delivers the combination of precise 3D measurements over medium to long distances, consistent performance regardless of illumination, and the point-cloud density needed for safe navigation. LiDAR systems do exhibit intrinsic noise (e.g., range quantization or occasional multi-return ambiguities), but in most robotic applications these effects are minor compared to environmental degradation.
In subterranean and rescue domain scenarios, the dominant challenge is airborne particles: dust kicked up by debris or smoke from fires. These aerosols create early returns that can mask real obstacles and cause missing data behind particle clouds, undermining SLAM and perception algorithms designed for cleaner data. This degradation is a type of atmospheric scattering, which can be caused by any kind of airborne particulate (e.g., snowflakes) or liquid (e.g., water droplets). Other kinds of environmental noise exist as well, such as specular reflections caused by smooth surfaces, beam occlusion due to close objects blocking the sensor's field of view, or even thermal drift, where temperature affects the sensor's circuits and mechanics and introduces biases into the measurements.
All of these may create unwanted noise in the point cloud produced by the LiDAR, making this domain an important research topic. \cite{lidar_denoising_survey} gives an overview of the current state of research into denoising methods for LiDAR in adverse environments, categorizes them according to their approach (distance-, intensity-, or learning-based), and concludes that all approaches have merits but also open challenges to solve before autonomous systems can safely navigate these adverse environments. Current research is heavily focused on the automotive domain, as can be observed from the vastly higher number of methods filtering noise from adverse weather effects (environmental scattering from rain, snow, and fog) than from dust, smoke, or other particles that occur rarely in the automotive domain.
A learning-based method to filter dust-caused degradation from LiDAR data is introduced in~\cite{lidar_denoising_dust}. The authors employ a convolutional neural network to classify dust particles in LiDAR point clouds, enabling the filtering of those points, and compare their method to more conservative approaches, such as various outlier-removal algorithms. Another relevant example is the filtering method proposed in~\cite{lidar_subt_dust_removal}, which enables the filtration of point clouds degraded by smoke or dust in subterranean environments, with a focus on the search and rescue domain. To achieve this, the authors formulated a filtration framework that relies on dynamic onboard statistical cluster outlier removal to classify and remove dust particles in point clouds.
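To illustrate the kind of conservative baseline such learning-based methods are compared against, the following is a minimal sketch of classical statistical outlier removal on a point cloud. It is not the exact filter used in the cited works; the neighborhood size \texttt{k} and the threshold factor are illustrative assumptions that would be tuned per sensor and environment.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def statistical_outlier_removal(points, k=16, std_factor=2.0):
    """Drop points whose mean distance to their k nearest
    neighbors is unusually large (a classical SOR filter).

    points: (N, 3) array of XYZ coordinates.
    k, std_factor: illustrative defaults, tuned per sensor.
    """
    tree = cKDTree(points)
    # Query k+1 neighbors: the nearest one is the point itself.
    dists, _ = tree.query(points, k=k + 1)
    mean_dists = dists[:, 1:].mean(axis=1)
    # Keep points whose mean neighbor distance lies within
    # std_factor standard deviations of the global mean.
    threshold = mean_dists.mean() + std_factor * mean_dists.std()
    return points[mean_dists < threshold]
\end{verbatim}
Aerosol returns such as dust tend to form sparse, isolated points, which is why such distance-based statistics can separate them from returns off solid surfaces, though at the risk of also discarding legitimate sparse returns from distant or small objects.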
Our method does not aim to remove the noise or degraded points in the LiDAR data, but to quantify the degradation in order to inform the autonomous robot's other systems about the data's quality, enabling more informed decisions. One such approach, though from the autonomous driving rather than the search and rescue domain, can be found in~\cite{degradation_quantification_rain}. The authors proposed a learning-based method to quantify LiDAR sensor data degradation caused by adverse weather effects, posing the problem as an anomaly detection task and using DeepSAD to learn degraded data as anomalous and high-quality data as normal behaviour; DeepSAD's anomaly score then serves as the degradation quantification score. We decided to adapt this method for the search and rescue domain, although this proved challenging due to the more limited data availability. Since it proved successful for~\cite{degradation_quantification_rain}, we also employ DeepSAD, whose detailed workings we present in the following chapter.
%\todo[inline]{related work, survey on lidar denoising, noise removal in subt - quantifying same as us in rain, also used deepsad - transition}
%\todo[inline]{related work in lidar}