diff --git a/thesis/Main.pdf b/thesis/Main.pdf
index 26fab8e..3dc29ae 100644
Binary files a/thesis/Main.pdf and b/thesis/Main.pdf differ
diff --git a/thesis/Main.tex b/thesis/Main.tex
index 2687d94..b769c70 100755
--- a/thesis/Main.tex
+++ b/thesis/Main.tex
@@ -1166,7 +1166,7 @@ To compare the computational efficiency of the two architectures we show the num
 
-%\todo[inline]{rework table and calculate with actual scripts and network archs in deepsad codebase}
+\todo[inline]{next paragraph does not work anymore?}
 
 As can be seen, the efficient encoder requires an order of magnitude fewer parameters and significantly fewer operations while maintaining a comparable representational capacity. The key reason is the use of depth–wise separable convolutions, aggressive pooling along the densely sampled horizontal axis, and a channel squeezing strategy before the fully connected layer. Interestingly, the Efficient network also processes more intermediate channels (up to 32 compared to only 8 in the LeNet variant), which increases its ability to capture a richer set of patterns despite the reduced computational cost. This combination of efficiency and representational power makes the Efficient encoder a more suitable backbone for our anomaly detection task.
 
@@ -1378,7 +1378,7 @@ Pretraining runtimes for the autoencoders are reported in Table~\ref{tab:ae_pret
 \end{tabularx}
 \end{table}
 
-The full DeepSAD training times are shown in Table~\ref{tab:train_runtimes_compact}, alongside the two classical baselines Isolation Forest and One-Class SVM. Here the contrast between methods is clear: while DeepSAD requires on the order of 15–20 minutes of GPU training per configuration, both baselines complete training in seconds on CPU. The OCSVM training can only be this fast due to the reduced input dimensionality from utilizing DeepSAD's pretraining encoder as a preprocessing step, although other dimensionality reduction methods may also be used which may require less computational resources for this step.
+The full DeepSAD training times are shown in Table~\ref{tab:train_runtimes_compact}, alongside the two classical baselines, Isolation Forest and One-Class SVM. Here the contrast between methods is clear: while DeepSAD requires on the order of 15–20 minutes of GPU training per configuration and fold, both baselines complete training in seconds on CPU. The OCSVM training is only this fast because of the reduced input dimensionality obtained by using DeepSAD's pretrained encoder as a preprocessing step; other dimensionality reduction methods could also serve this purpose, possibly at lower computational cost.
 
 \begin{table}
 \centering
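
For context on the efficiency claim in the first hunk, a back-of-the-envelope parameter count shows why a depthwise separable convolution needs far fewer parameters than a standard convolution. This is illustrative only: it assumes a 3x3 kernel and the 8-to-32 channel transition quoted in the paragraph, not the exact layer shapes used in the thesis architectures.

% Illustrative parameter count, assuming k = 3, C_in = 8, C_out = 32
% (channel figures from the paragraph; the kernel size is an assumption):
\begin{align*}
  P_{\text{standard}}  &= k^{2}\, C_{\text{in}}\, C_{\text{out}} = 3^{2}\cdot 8\cdot 32 = 2304,\\
  P_{\text{separable}} &= \underbrace{k^{2}\, C_{\text{in}}}_{\text{depthwise}}
                        + \underbrace{C_{\text{in}}\, C_{\text{out}}}_{\text{pointwise}}
                        = 72 + 256 = 328.
\end{align*}

Per layer this is roughly a sevenfold saving before biases; combined with aggressive pooling and the channel squeezing before the fully connected layer, it accounts for the order-of-magnitude gap in total parameters.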
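
For context on the runtime comparison in the second hunk, below is a minimal sketch of how a pretrained encoder can act as the dimensionality-reduction step in front of the classical baselines. This is not the DeepSAD codebase's actual script; the encode helper, the batch size, and the Isolation Forest / OCSVM hyperparameters are illustrative assumptions.

# Illustrative sketch only -- not the thesis training script. It shows the
# pipeline described in the text: a pretrained encoder reduces the input
# dimensionality before the classical baselines are fit on CPU in seconds.
import numpy as np
import torch
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM


def encode(encoder: torch.nn.Module, x: np.ndarray, batch_size: int = 256) -> np.ndarray:
    """Project raw inputs into the encoder's low-dimensional latent space."""
    encoder.eval()
    chunks = []
    with torch.no_grad():
        for i in range(0, len(x), batch_size):
            batch = torch.as_tensor(x[i:i + batch_size], dtype=torch.float32)
            chunks.append(encoder(batch).cpu().numpy())
    return np.concatenate(chunks)


def fit_baselines(encoder: torch.nn.Module, x_train: np.ndarray):
    # Fit both baselines on latent features instead of the raw inputs.
    z_train = encode(encoder, x_train)
    iforest = IsolationForest(n_estimators=100, random_state=0).fit(z_train)
    ocsvm = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(z_train)
    return iforest, ocsvm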