better formatting
parent 5aa4f1169a
commit bd43a83046
4 changed files with 220 additions and 10 deletions
@@ -1,5 +1,7 @@
\documentclass[conference]{IEEEconf}


%\input epsf
+\usepackage{graphicx}
+\usepackage{multirow}
@@ -66,7 +68,7 @@ We present in this paper a novel time series, one-shot classifier called \gls{mad}
% for peerreview papers, inserts a page break and creates the second title.
% Will be ignored for other modes.
\IEEEpeerreviewmaketitle

\agd{reset acronyms}

\section{Introduction}

@@ -228,7 +230,7 @@ ID(P_j,P_l) = \min_{i\in[0,N_l-N_j]} nd(P_j,P_l[i:i+N_j])
\end{equation}
represents the smallest distance between $P_j$ and any substring of length $N_j$ from $P_l$ --- with $N_l>N_j$.
If $N_l<N_j$, then $ID(P_j,P_l) = ID(P_l,P_j)$.
-In other words, when computing the inter-distance between two patterns, we slide the short pattern along the length of the long one and compute the normalized distance at every position to finally consider only the smallest of these distances as the inter-distance.
+When computing the inter-distance between two patterns, we slide the short pattern along the length of the long one and compute the normalized distance at every position to finally consider only the smallest of these distances as the inter-distance.

To fully define the threshold $T_j$, we introduce the shrinkage coefficient $\alpha$.
This coefficient, multiplied with the smallest inter-distance $ID_j$, forms the threshold $T_j$.
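The inter-distance definition above translates directly into code. This is a minimal sketch: `nd` is a stand-in (Euclidean distance normalized by pattern length), since the exact definition of `nd` falls outside this diff.

```python
import math

def nd(a, b):
    # Stand-in normalized distance: Euclidean distance divided by pattern
    # length. The paper's exact definition of nd is not in this excerpt.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b))) / len(a)

def inter_distance(p_j, p_l):
    # ID(P_j, P_l): slide the shorter pattern along the longer one and keep
    # the smallest normalized distance over all positions.
    if len(p_l) < len(p_j):          # if N_l < N_j, ID(P_j,P_l) = ID(P_l,P_j)
        p_j, p_l = p_l, p_j
    n_j, n_l = len(p_j), len(p_l)
    return min(nd(p_j, p_l[i:i + n_j]) for i in range(n_l - n_j + 1))
```

For example, sliding `[1, 2]` along `[0, 1, 2, 3]` finds an exact match at position 1, so the inter-distance is 0.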
@@ -258,7 +260,7 @@ The algorithm for \gls{mad} follows three steps:

However, directly implementing this suite of operations is not optimal as it requires computing the distance from any substring to any pattern multiple times --- exactly once per sample in the substring.
A more efficient solution considers each substring only once.
-In other words, iterating over the patterns rather than the samples is more efficient as it replaces distance computations with comparison operations.
+Iterating over the patterns rather than the samples is more efficient as it replaces distance computations with comparison operations.
The efficient implementation follows the operations:

\begin{enumerate}
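One way to read "considers each substring only once": compute each substring-to-pattern distance a single time, then propagate it to the window's samples with comparisons only. A hedged sketch, not the paper's exact operations — `nd`, the `thresholds` argument, and the labeling convention (`None` = unknown) are assumptions:

```python
import math

def nd(a, b):
    # Stand-in normalized distance (Euclidean / length); the paper's exact
    # definition of nd is not in this excerpt.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b))) / len(a)

def label_series(series, patterns, thresholds):
    n = len(series)
    best = [(math.inf, None)] * n        # per sample: (distance, pattern idx)
    # Iterate over patterns and window positions: each substring-to-pattern
    # distance is computed exactly once ...
    for j, p in enumerate(patterns):
        w = len(p)
        for i in range(n - w + 1):
            d = nd(p, series[i:i + w])
            if d >= thresholds[j]:       # window too far from pattern j
                continue
            # ... and spreading it over the window's samples needs only
            # comparison operations, no further distance computations.
            for k in range(i, i + w):
                if d < best[k][0]:
                    best[k] = (d, j)
    return [j for _, j in best]          # None marks unknown samples
```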
@@ -353,8 +355,8 @@ Finally, the third part uses the same loops as the second and also terminates.
Overall, \gls{mad} always terminates for any finite time series and finite set of finite patterns.

\textbf{Monotony of number of unknown sample}\agd{find better title}
-Explain that the number of unknown sample is monotonic as a function of alpha.
-Also, a sample that is classified as unknown will always remain unknown if alpha decreases.
+\agd{Explain that the number of unknown samples is monotonic as a function of alpha.
+Also, a sample that is classified as unknown will always remain unknown if alpha decreases.}

\section{Evaluation}
The evaluation of \gls{mad} consists of the detection of the states for time series from various machines.
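The monotonicity claimed in the author note follows from the threshold definition $T_j = \alpha \cdot ID_j$: shrinking $\alpha$ shrinks every threshold, so the set of unknown samples can only grow. A tiny numeric illustration — the per-sample distances and the single-pattern setting are hypothetical:

```python
def unknown_count(best_distances, inter_dist, alpha):
    # A sample remains unknown when its best distance to any pattern is not
    # below the threshold T = alpha * ID.
    t = alpha * inter_dist
    return sum(1 for d in best_distances if d >= t)

dists = [0.1, 0.3, 0.5, 0.9]   # hypothetical best distances per sample
counts = [unknown_count(dists, 1.0, a) for a in (0.8, 0.5, 0.2)]
# counts never decrease as alpha decreases: [1, 2, 3]
```

A sample unknown at some $\alpha$ satisfies $d \ge \alpha \cdot ID$, hence also $d \ge \alpha' \cdot ID$ for any $\alpha' < \alpha$ — it stays unknown.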
@@ -452,7 +454,8 @@ The activity in this second time series was very sparse with long periods without
The no-consumption sections are not challenging --- i.e., all detectors perform well on this type of pattern ---, make the manual labeling more difficult, and level all results up.
For this reason, we removed large sections of inactivity between active segments to make the time series more challenging without tampering with the order of detector performances.

-\input{refit_table}
+%\input{refit_table}
+\agd{include table about refit dataset}

\subsection{Alternative Methods}
\agd{Explain better why the alternative methods are chosen.}
@@ -621,7 +624,7 @@ Built as a variation of a traditional \gls{1nn}, \gls{mad} uses a dynamic window
One hyper-parameter, $\alpha$, controls the confidence of the detector and the trade-off between unclassified and misclassified samples.
The comparison to traditional state detection methods highlights the potential of \gls{mad} for the pre-processing of raw data for security applications.

-\bibliographystyle{splncs04}
+\bibliographystyle{plain}
\bibliography{biblio}

\end{document}