In the main training step, DeepSAD's network is trained with SGD backpropagation. The unlabeled training data is used to minimize a data-enclosing hypersphere. Since one of the preconditions of training is a significant prevalence of normal data over anomalies in the training set, normal samples collectively cluster tightly around the centroid, while the rarer anomalous samples contribute less to the optimization and therefore remain farther from the hypersphere center. The labeled data carries binary class labels marking each sample as either normal or anomalous. Labeled anomalies are pushed away from the center by defining their optimization target as maximizing their distance to $\mathbf{c}$. Labeled normal samples are treated similarly to unlabeled ones, with the difference that DeepSAD includes a hyperparameter controlling the proportion with which labeled and unlabeled data contribute to the overall optimization. The resulting network maps normal data samples close to $\mathbf{c}$ in the latent space and anomalies farther away.
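Written out, this objective can be stated explicitly. The following formulation is adapted from Ruff et al.'s DeepSAD paper; the symbols beyond $\mathbf{c}$ ($\phi$, $n$, $m$, $\eta$, $\lambda$) are introduced here for illustration: with a network $\phi(\cdot\,;\mathcal{W})$, $n$ unlabeled samples $\mathbf{x}_i$, and $m$ labeled samples $\tilde{\mathbf{x}}_j$ carrying labels $\tilde{y}_j \in \{+1, -1\}$ (normal and anomalous, respectively),
\[
\min_{\mathcal{W}} \; \frac{1}{n+m} \sum_{i=1}^{n} \big\lVert \phi(\mathbf{x}_i;\mathcal{W}) - \mathbf{c} \big\rVert^2
+ \frac{\eta}{n+m} \sum_{j=1}^{m} \Big( \big\lVert \phi(\tilde{\mathbf{x}}_j;\mathcal{W}) - \mathbf{c} \big\rVert^2 \Big)^{\tilde{y}_j}
+ \frac{\lambda}{2} \sum_{\ell=1}^{L} \big\lVert \mathbf{W}^{\ell} \big\rVert_F^2 .
\]
For $\tilde{y}_j = -1$ the exponent inverts the squared distance, so minimizing the loss drives labeled anomalies away from $\mathbf{c}$, while $\eta$ is exactly the hyperparameter weighting labeled against unlabeled contributions; the last term is standard weight decay.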
\todo[inline]{maybe pseudocode algorithm block?}
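For reference, a minimal PyTorch-style sketch of one training epoch under this objective (all names here are hypothetical; it assumes the loader yields labels $y=0$ for unlabeled, $+1$ for labeled normal, and $-1$ for labeled anomalous samples, and omits weight decay, which would be handled by the optimizer):
\begin{verbatim}
import torch

def deepsad_epoch(model, center, loader, opt, eta=1.0, eps=1e-6):
    # One epoch of a sketched DeepSAD objective. `model` maps inputs
    # to the latent space, `center` is the fixed hypersphere center c.
    for x, y in loader:
        opt.zero_grad()
        dist = torch.sum((model(x) - center) ** 2, dim=1)
        loss = torch.where(
            y == -1,
            eta / (dist + eps),  # labeled anomaly: push away from c
            torch.where(y == 1,
                        eta * dist,  # labeled normal: weighted pull
                        dist),       # unlabeled: pull toward c
        ).mean()
        loss.backward()
        opt.step()
\end{verbatim}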
\fig{deepsad_procedure}{diagrams/deepsad_procedure}{WIP: Depiction of DeepSAD's training procedure, including data flows and tweakable hyperparameters.}
\threadtodo{how to use the trained network?}