structure of experimental setup chapter done
@@ -898,19 +898,79 @@ By evaluating and comparing both approaches, we hope to demonstrate a more thoro
 {codebase, hardware description overview of training setup, details of deepsad setup}
 {overview of chapter given $\rightarrow$ give sequential setup overview}
 
-\todo[inline]{codebase}
+%\todo[inline]{codebase}
 
-\newsection{setup_overview}{General Description}
-\todo[inline]{starting from deepsad codebase}
-\todo[inline]{data preprocessed (2d projections, normalized range)}
-\todo[inline]{k-fold data loading, training, testing}
-\todo[inline]{deepsad + baselines = isoforest, ocsvm (deepsad ae, dim reduction)}
-\todo[inline]{roc, prc, inference}
+\newsection{setup_overview}{Experimental Setup Overview}
+
+%\todo[inline]{starting from deepsad codebase}
+\threadtodo
+{Explain deepsad codebase as starting point}
+{what is the starting point?}
+{codebase, github, dataloading, training, testing, baselines}
+{codebase understood $\rightarrow$ how was it adapted}
+
+%\todo[inline]{data preprocessed (2d projections, normalized range)}
+\threadtodo
+{explain how dataloading was adapted}
+{loading the data is the first step towards the new training}
+{preprocessed numpy (script), load, labels/meta, split, k-fold}
+{k-fold $\rightarrow$ also adapted in training/testing}
+
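
The dataloading thread above comes down to three steps: load the preprocessed numpy arrays, attach the labels/meta, and build the k-fold split that training and testing will share. A minimal sketch of that shape, assuming hypothetical file names and scikit-learn's StratifiedKFold rather than the actual loading script:

import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.load("projections.npy")  # preprocessed 2D projections, normalized range
y = np.load("labels.npy")       # Deep SAD convention: +1 labeled normal, -1 labeled anomalous, 0 unlabeled

# One split object, reused identically by training and testing
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
folds = list(skf.split(X, y))

for train_idx, test_idx in folds:
    X_train, y_train = X[train_idx], y[train_idx]
    X_test, y_test = X[test_idx], y[test_idx]
    # hand one fold's arrays to the training/testing stage
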
+%\todo[inline]{k-fold data loading, training, testing}
+\threadtodo
+{how training/testing was adapted (networks overview), inference, ae tuning}
+{data has been loaded, how is it processed}
+{networks defined, training/testing k-fold, more metrics, inference + ae tuning implemented}
+{training procedure known $\rightarrow$ what methods were evaluated}
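
The training/testing adaptation above then consumes those folds: per fold, build the networks, run the semi-supervised training, and score the held-out part. A rough sketch of the control flow, in which build_model, train and score are hypothetical stand-ins for the DeepSAD-specific steps, not the codebase API:

import numpy as np
from sklearn.model_selection import StratifiedKFold

def run_kfold(X, y, build_model, train, score, n_splits=5):
    """Train and score once per fold; return held-out scores for evaluation."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=42)
    fold_scores = []
    for train_idx, test_idx in skf.split(X, y):
        model = build_model()                     # fresh networks per fold
        train(model, X[train_idx], y[train_idx])  # semi-supervised fit
        fold_scores.append((test_idx, score(model, X[test_idx])))
    return fold_scores                            # fed into the ROC/PRC evaluation
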
+%\todo[inline]{deepsad + baselines = isoforest, ocsvm (deepsad ae, dim reduction)}
+\threadtodo
+{what methods were evaluated}
+{we know what training/testing was implemented for deepsad, but what is it compared to}
+{isoforest, ocsvm adapted; for ocsvm only dim-reduced input is feasible (ae from deepsad)}
+{compared methods known $\rightarrow$ what evaluation methods were used}
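
For the baseline block above, both methods exist in scikit-learn; per the note, OC-SVM was only feasible on dimensionality-reduced input taken from the DeepSAD autoencoder's bottleneck. A sketch, with encode standing in for that (hypothetical) encoder interface:

from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

def fit_baselines(X_train, X_test, encode):
    # Isolation Forest runs on the full input
    iso = IsolationForest(n_estimators=100, random_state=42).fit(X_train)
    iso_scores = -iso.score_samples(X_test)   # flip so higher = more anomalous

    # OC-SVM only on the AE-compressed features
    Z_train, Z_test = encode(X_train), encode(X_test)
    svm = OneClassSVM(kernel="rbf", nu=0.1).fit(Z_train)
    svm_scores = -svm.score_samples(Z_test)
    return iso_scores, svm_scores
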
+%\todo[inline]{roc, prc, inference}
+\threadtodo
+{what evaluation methods were used}
+{we know what is compared but want to know exactly how}
+{explain roc, prc, inference with one experiment left out of training}
+{experiment overview given $\rightarrow$ details of deepsad during training?}
+
+
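
The evaluation thread above reduces to standard ROC and precision-recall computation over the held-out anomaly scores (the inference-time measurement is left out here); a sketch using scikit-learn's metrics:

from sklearn.metrics import (average_precision_score, precision_recall_curve,
                             roc_auc_score, roc_curve)

def evaluate(y_true, scores):
    """y_true: 1 = anomalous, 0 = normal (binarized from the semi labels);
    scores: higher = more anomalous."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    precision, recall, _ = precision_recall_curve(y_true, scores)
    return {
        "auroc": roc_auc_score(y_true, scores),
        "auprc": average_precision_score(y_true, scores),
        "roc": (fpr, tpr),
        "prc": (precision, recall),
    }
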
-\newsection{setup_deepsad}{DeepSAD Description}
-\todo[inline]{architectures, visualization, receptive field (explanation, images, x/y resolution)}
-\todo[inline]{hyperparameters, LR, eta, epochs, latent space size (hyper param search), semi labels}
+\newsection{setup_deepsad}{DeepSAD Experimental Setup}
+\threadtodo
+{custom arch necessary, first lenet then second arch to evaluate importance of arch}
+{training process understood, but what networks were actually trained}
+{custom arch, lenet from paper and simple, receptive field problem, arch really important?}
+{motivation behind archs given $\rightarrow$ what do they look like}
+%\todo[inline]{architectures, visualization, receptive field (explanation, images, x/y resolution)}
+\threadtodo
+{show and explain both archs}
+{we know why we need them but what do they look like}
+{visualization of archs, explain LeNet and why the other arch was chosen that way}
+{both archs known $\rightarrow$ what about the other inputs/hyperparameters}
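
The receptive-field problem flagged above is mechanical to check: stacking conv/pool layers grows the receptive field as r <- r + (k - 1) * j, where j is the product of all strides so far. A small sketch; the layer stack is an illustrative LeNet-style example, not necessarily either thesis architecture:

def receptive_field(layers):
    """layers: sequence of (kernel_size, stride) tuples, input to output."""
    r, j = 1, 1                 # receptive field size and cumulative stride
    for kernel, stride in layers:
        r += (kernel - 1) * j
        j *= stride
    return r

# Two 5x5 convs, each followed by 2x2/2 max pooling
print(receptive_field([(5, 1), (2, 2), (5, 1), (2, 2)]))  # -> 16 input pixels
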
+%\todo[inline]{hyperparameters, LR, eta, epochs, latent space size (hyper param search), semi labels}
+\threadtodo
+{give overview of hyperparameters}
+{deepsad arch known, other hyperparameters?}
+{LR, eta, epochs, latent space size (hyper param search), semi labels}
+{everything that goes into training known $\rightarrow$ what experiments were actually done?}
+
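
The hyperparameters listed above could be gathered into one config; the values below are placeholders for the ranges searched over, not reported results:

# Illustrative defaults only; eta weights the labeled term in the Deep SAD loss
config = {
    "lr": 1e-4,            # learning rate
    "eta": 1.0,            # semi-supervised loss weight
    "n_epochs": 150,
    "latent_dim": 32,      # latent space size, subject of the hyperparameter search
    "semi_labels": {"normal": 1, "anomalous": -1, "unlabeled": 0},
}
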
+\newsection{setup_matrix}{Experiment Matrix}
+
+%\todo[inline]{what experiments were performed and why (table/list containing experiments)}
+\threadtodo
+{give overview of experiments and their motivations}
+{training setup clear, but not what was trained/tested}
+{explanation of what was searched for (ae latent space first), other hyperparams and why}
+{all experiments known $\rightarrow$ how long do they take to train}
+
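
One way to make the experiment matrix concrete is a cross product over the searched axes. The axes and values here are illustrative placeholders; the notes only fix that the AE latent space size was searched first:

from itertools import product

architectures = ["lenet", "custom"]
latent_dims = [16, 32, 64]     # AE latent space, searched first
etas = [0.01, 1.0, 100.0]

experiments = [
    {"arch": a, "latent_dim": d, "eta": e}
    for a, d, e in product(architectures, latent_dims, etas)
]
print(len(experiments))        # 2 * 3 * 3 = 18 runs
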
+\newsection{setup_hardware}{Experiment Hardware and Runtimes}
+
+\threadtodo
+{give overview of the hardware setup and how long things take to train}
+{we know what we trained but not how long that takes}
+{table of hardware and of how long the different trainings took}
+{experiment setup understood $\rightarrow$ what were the experiments' results}
+
 % \newsection{autoencoder_architecture}{Deep SAD Autoencoder Architecture}
 % \newsection{data_setup}{Training/Evaluation Data Distribution}