abstract lidar capitalization
BIN
thesis/Main.pdf
Binary file not shown.
@@ -91,7 +91,8 @@
-\DeclareRobustCommand{\rev}[1]{\textcolor{red}{#1}}
+%\DeclareRobustCommand{\rev}[1]{\textcolor{red}{#1}}
+\DeclareRobustCommand{\rev}[1]{#1}
 \DeclareRobustCommand{\mcah}[1]{}

 % correct bad hyphenation
@@ -763,7 +764,7 @@ We adapted the baseline implementations to our data loader and input format and
 \paragraph{Evaluation Metrics}

-As discussed in Section~\ref{sec:preprocessing}, evaluating model performance in our setup is challenging due to the absence of an analog ground truth. Instead, we rely on binary labels that are additionally noisy and subjective. All models under consideration produce continuous anomaly scores: DeepSAD outputs a positive-valued distance to the center of a hypersphere, Isolation Forest measures deviation from the mean tree depth (which can be negative), and OCSVM returns a signed distance to the decision boundary. Because these scores differ in scale and sign—and due to the lack of a reliable degradation threshold—it is not appropriate to evaluate performance using metrics such as accuracy or F1 score, both of which require classification at a fixed threshold.
+As discussed in Section~\ref{sec:preprocessing}, evaluating model performance in our setup is challenging due to the absence of analog ground truth. Instead, we rely on binary labels that are additionally noisy and subjective. All models under consideration produce continuous anomaly scores: DeepSAD outputs a positive-valued distance to the center of a hypersphere, Isolation Forest measures deviation from the mean tree depth (which can be negative), and OCSVM returns a signed distance to the decision boundary. Because these scores differ in scale and sign—and due to the lack of a reliable degradation threshold—it is not appropriate to evaluate performance using metrics such as accuracy or F1 score, both of which require classification at a fixed threshold.

 Instead, we adopt threshold-independent evaluation curves that illustrate model behavior across the full range of possible thresholds. The most commonly used of these is the Receiver Operating Characteristic (ROC)~\cite{roc} curve, along with its scalar summary metric, ROC AUC. ROC curves plot the true positive rate (TPR) against the false positive rate (FPR), providing insight into how well a model separates the two classes. However, as noted in~\cite{roc_vs_prc2,roc_vs_prc} and confirmed in our own testing, ROC AUC can be misleading under strong class imbalance—a common condition in anomaly detection.
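The paragraph in this hunk notes that ROC AUC can be misleading under strong class imbalance. A minimal sketch of that effect, not part of the patch, using synthetic anomaly scores and scikit-learn's `roc_auc_score` and `average_precision_score` (the class sizes and score distributions are illustrative assumptions):

```python
# Sketch: ROC AUC vs. PR AUC under strong class imbalance.
# 1000 "normal" scans vs. 20 "degraded" scans; score distributions are assumed.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)

# Binary labels: 0 = normal, 1 = degraded (heavily imbalanced, as in the text).
y = np.concatenate([np.zeros(1000), np.ones(20)])

# Continuous anomaly scores: degraded scans score higher on average,
# but the distributions overlap, producing false positives at any threshold.
scores = np.concatenate([rng.normal(0.0, 1.0, 1000),
                         rng.normal(2.5, 1.0, 20)])

roc = roc_auc_score(y, scores)            # insensitive to the class ratio
prc = average_precision_score(y, scores)  # penalized by the many false positives
print(f"ROC AUC = {roc:.3f}, PR AUC (AP) = {prc:.3f}")
```

Because precision directly reflects the absolute number of false positives, the PR-based summary drops well below the ROC AUC for the same scores, which is the imbalance effect the revised paragraph refers to.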
@@ -1,9 +1,9 @@
 \addcontentsline{toc}{chapter}{Abstract}
 \begin{center}\Large\bfseries Abstract\end{center}\vspace*{1cm}\noindent
-Autonomous robots are increasingly used in search and rescue (SAR) missions. In these missions, lidar sensors are often the most important source of environmental data. However, lidar data can degrade under hazardous conditions, especially when airborne particles such as smoke or dust are present. This degradation can lead to errors in mapping and navigation and may endanger both the robot and humans. Therefore, robots need a way to estimate the reliability of their lidar data, so \rev{that} they can make better-informed decisions.
+Autonomous robots are increasingly used in search and rescue (SAR) missions. In these missions, LiDAR sensors are often the most important source of environmental data. However, LiDAR data can degrade under hazardous conditions, especially when airborne particles such as smoke or dust are present. This degradation can lead to errors in mapping and navigation and may endanger both the robot and humans. Therefore, robots need a way to estimate the reliability of their LiDAR data, so \rev{that} they can make better-informed decisions.
 \bigskip

-This thesis investigates whether anomaly detection methods can be used to quantify lidar data degradation \rev{caused by airborne particles such as smoke and dust}. We apply a semi-supervised deep learning approach called DeepSAD, which produces an anomaly score for each lidar scan, serving as a measure of data reliability.
+This thesis investigates whether anomaly detection methods can be used to quantify LiDAR data degradation \rev{caused by airborne particles such as smoke and dust}. We apply a semi-supervised deep learning approach called DeepSAD, which produces an anomaly score for each LiDAR scan, serving as a measure of data reliability.
 \bigskip

-We evaluate this method against baseline methods on a subterranean dataset that includes lidar scans degraded by artificial smoke. Our results show that DeepSAD consistently outperforms the baselines and can clearly distinguish degraded from normal scans. At the same time, we find that the limited availability of labeled data and the lack of robust ground truth remain major challenges. Despite these limitations, our work demonstrates that anomaly detection methods are a promising tool for lidar degradation quantification in SAR scenarios.
+We evaluate this method against baseline methods on a subterranean dataset that includes LiDAR scans degraded by artificial smoke. Our results show that DeepSAD consistently outperforms the baselines and can clearly distinguish degraded from normal scans. At the same time, we find that the limited availability of labeled data and the lack of robust ground truth remain major challenges. Despite these limitations, our work demonstrates that anomaly detection methods are a promising tool for LiDAR degradation quantification in SAR scenarios.