\documentclass[conference]{IEEEconf}
|
|
|
|
|
|
%\input epsf
|
|
\usepackage{graphicx}
|
|
\usepackage{multirow}
|
|
\usepackage{xcolor}
|
|
\usepackage{booktabs}
|
|
\usepackage{tabularx}
|
|
\usepackage{algpseudocodex}
|
|
\usepackage{algorithm}
|
|
\usepackage{amsfonts}
|
|
\usepackage{amssymb}
|
|
\usepackage{amsthm}
|
|
\usepackage[toc,acronym,abbreviations,nonumberlist,nogroupskip]{glossaries-extra}
|
|
|
|
\renewcommand\thesection{\arabic{section}} % arabic numerals for the sections
|
|
\renewcommand\thesubsectiondis{\thesection.\arabic{subsection}.}% arabic numerals for the subsections
|
|
\renewcommand\thesubsubsectiondis{\thesubsectiondis.\arabic{subsubsection}.}% arabic numerals for the subsubsections
|
|
|
|
\newtheorem{problem-statement}{Problem Statement}
|
|
|
|
\newcommand\agd[1]{{\color{red}$\bigstar$}\footnote{agd: #1}}
|
|
\newcommand\SF[1]{{\color{blue}$\bigstar$}\footnote{sf: #1}}
|
|
\newcommand{\cn}{{\color{purple}[citation needed]}}
|
|
\newcommand{\pv}{{\color{orange}[passive voice]}}
|
|
\newcommand{\wv}{{\color{orange}[weak verb]}}
|
|
|
|
|
|
% correct bad hyphenation here
|
|
\hyphenation{op-tical net-works semi-conduc-tor IEEEconf hyper-parameter una-voidably li-te-ra-ture exam-ple thre-sholds tra-di-tio-nal-ly ge-ne-ra-ted}
|
|
\begin{document}
|
|
\input{acronyms}
|
|
\title{\textbf{\Large MAD: One-Shot Machine Activity Detector for Physics-Based Cyber Security\\}}
|
|
%\author{
|
|
% Arthur Grisel-Davy$^{1,*}$, Sebastian Fischmeister$^{1}$\\
|
|
% \normalsize $^{1}$University of Waterloo, Ontario, Canada\\
|
|
% \normalsize agriseld@uwaterloo.ca, sfishme@uwaterloo.ca\\
|
|
% \normalsize *corresponding author
|
|
%}
|
|
\author{
|
|
\vspace{\baselineskip}
|
|
\vspace{1.1\baselineskip}
|
|
}
|
|
%+++++++++++++++++++++++++++++++++++++++++++
|
|
|
|
% use only for invited papers
|
|
%\specialpapernotice{(Invited Paper)}
|
|
|
|
% make the title area
|
|
\maketitle
|
|
\begin{abstract}
|
|
Side channel analysis offers several advantages over traditional machine monitoring methods.
|
|
The low intrusiveness, independence from the host, data reliability, and difficulty to bypass are compelling arguments for using involuntary emissions as input for enforcing security policies.
|
|
However, side-channel information often comes in the form of unlabeled time series of a proxy variable of the activity.
|
|
Enabling the definition and enforcement of high-level security policies requires extracting the state or activity of the system from the input data.
|
|
In this paper, we present a novel one-shot pattern locator and classifier for time series, called Machine Activity Detector (MAD), specifically designed and evaluated for side-channel analysis.
|
|
We evaluate MAD in two case studies on a variety of machines and datasets, where it outperforms traditional state detection solutions and delivers strong performance for security-rule enforcement.
|
|
Results of state detection with MAD enable the definition and verification of high-level security rules to detect various attacks without any interaction with the monitored machine.
|
|
\end{abstract}
|
|
%\IEEEoverridecommandlockouts
|
|
%\vspace{1.5ex}
|
|
\begin{keywords}
|
|
\itshape Side-Channel Analysis; Intrusion Detection.
|
|
\end{keywords}
|
|
% no keywords
|
|
|
|
% For peer review papers, you can put extra information on the cover
|
|
% page as needed:
|
|
% \begin{center} \bfseries EDICS Category: 3-BBND \end{center}
|
|
%
|
|
% for peerreview papers, inserts a page break and creates the second title.
|
|
% Will be ignored for other modes.
|
|
\IEEEpeerreviewmaketitle
|
|
|
|
\section{Introduction}
|
|
|
|
\gls{ids}s leverage different types of data to detect intrusions.
|
|
On one side, most solutions use labeled and actionable data, often provided by the system to protect.
|
|
This data can be the resource usage \cite{1702202}, program source code \cite{9491765} or network traffic \cite{10.1145/2940343.2940348} leveraged by an \gls{hids} or \gls{nids}.
|
|
On the other side, some methods consider only information that the system did not intentionally provide.
|
|
The system emits these activity by-products through physical media called side channels.
|
|
Common side-channel information for an embedded system includes power consumption \cite{yang2016power} or electromagnetic fields \cite{chawla2021machine}.
|
|
|
|
Side-channel information offers compelling advantages over agent-collected information.
|
|
First, the information is difficult to forge.
|
|
Because the monitored system is not involved in the data retrieval process, an attacker who compromised the system cannot easily send forged information.
|
|
For example, if an attacker performs any computation on the system, it will unavoidably affect a variety of different side channels.
|
|
There are studies focusing on altering the power consumption profile of software, but their goal is to mask the consumption pattern to avoid leaking side-channel information.
|
|
These solutions \cite{1253591,6918465} do not offer to change the pattern to an arbitrary target but to make all activities indistinguishable.
|
|
These methods still induce changes in the consumption pattern that make them identifiable by the detection system.
|
|
Second, the side-channel information retrieval process is often non-intrusive and non-disruptive for the monitored system.
|
|
Measuring the power consumption of a computer does not involve the cooperation or modification of the system \cite{10.1145/2976749.2978353}.
|
|
This host independence property is crucial for safety-critical or high-availability applications as the failure of one of the two --- monitored or monitoring --- systems does not affect the other.
|
|
These two properties --- reliable data and host independence --- set physics-based monitoring solutions apart with distinct advantages and use cases.
|
|
|
|
It is interesting to notice that leveraging side-channel analysis to detect malfunction is not limited to software.
|
|
For production machines with high availability requirements, many side-channels provide useful information about the state of the machine.
|
|
Common sources of information are vibrations \cite{zhang2019numerical}, the chemical composition of various fluids \cite{4393062}, the shape of a gear \cite{wang2015measurement} or performance metrics like the throughput of a pump \cite{gupta2021novel}.
|
|
It is important to keep in mind that domains outside of software can also benefit from side-channel analysis tools tailored for security enforcement.
|
|
|
|
However, using side-channel data introduces new challenges.
|
|
One obstacle to overcome when designing a physics-based solution is the interpretation of the data.
|
|
Because the data collection consists of measuring a physical phenomenon, the input data is often a discrete time series.
|
|
The values in these time series are not directly actionable.
|
|
In some cases, a threshold value is enough to assess the integrity of the system.
|
|
In such a case, comparing each value of the time series to the threshold is possible \cite{jelali2013statistical}.
|
|
However, whenever a simple threshold is not a reliable factor for the decision, a more advanced analysis of the time series is required to make it actionable.
|
|
The state of a machine is often represented by a specific pattern.
|
|
This pattern could be, for example, a succession of specific amplitudes or a frequency/average pair for periodic processes.
|
|
These patterns are impossible to reliably detect with a simple threshold method.
|
|
Identifying the occurrence and position of these patterns makes the data actionable and enables higher-level --- i.e., that work at a higher level of abstraction \cite{tongaonkar2007inferring} --- security and monitoring policies.
|
|
For example, a computer starting at night or rebooting multiple times in a row should raise an alert for a possible intrusion or malfunction.
|
|
|
|
Rule-based \gls{ids}s using side-channel information require an accurate and practical pattern detection solution.
|
|
Many data-mining algorithms assume that training data is cheap, meaning that acquiring large --- labeled --- datasets is achievable without significant expense.
|
|
Unfortunately, collecting labeled data requires following a procedure and induces downtime for the machine, which can be expensive.
|
|
Collecting many training samples during normal operations of the machine is more time-consuming as the machine's activity cannot be controlled.
|
|
A more convenient data requirement would be a single sample of each pattern to detect.
|
|
Collecting a sample is immediately possible after the installation of the measurement equipment during normal operations of the machine.
|
|
|
|
This paper presents \gls{mad}, a distance-based, one-shot pattern detection method for time series.
|
|
\gls{mad} focuses on providing pre-defined state detection from only one training sample per class.
|
|
This approach enables the analysis of side-channel information in contexts where the collection of large datasets is impractical.
|
|
A window selection algorithm lies at the core of \gls{mad} and yields a stable classification of individual samples, essential for the robustness of high-level security rules.
|
|
In experiments, \gls{mad} outperforms other approaches in accuracy and in the reduced Levenshtein distance on various simulated, lab-captured, and public time-series datasets.
|
|
|
|
We will present the current related work on physics-based security and time series pattern detection in Section~\ref{sec:related}.
|
|
Then we will introduce the formal and practical definitions of the solution in Section~\ref{sec:statement} and~\ref{sec:solution}.
|
|
The two case studies presented in Sections~\ref{sec:cs1} and~\ref{sec:cs2} illustrate the performance of the solution in various situations.
|
|
Finally, we will discuss some important aspects of the proposed solution in Section~\ref{sec:discussion}.
|
|
|
|
\section{Related Work}\label{sec:related}
|
|
Side-channel analysis focuses on extracting information from the involuntary emissions of a system.
|
|
This topic traces back to the seminal work of Paul C. Kocher.
|
|
He introduced power side-channel analysis to extract secrets from several cryptographic protocols \cite{kocher1996timing}.
|
|
This led to the new field of side-channel analysis \cite{randolph2020power}.
|
|
However, the potential of leveraging side-channel information for defense and security purposes remains mostly untapped.
|
|
Information leaking through involuntary emissions on different channels provides insight into the activities of a machine.
|
|
Acoustic emissions \cite{belikovetsky2018digital}, heat pattern signature \cite{al2016forensics} or power consumption \cite{10.1145/3571288, gatlin2019detecting, CHOU2014400}, can --- among other side-channels --- reveal information about a machine's activity.
|
|
Side-channel information collection generally results in time series objects to analyze.
|
|
|
|
There exists a variety of methods for analyzing time series.
|
|
For signature-based solutions, a specific extract of the data is compared to known-good references to assess the integrity of the host \cite{9934955, hidden-articlemlcs}.
|
|
This signature comparison enables the verification of expected and specific sections and requires that the sections of interest can be extracted and synchronized.
|
|
Another solution for detecting intrusions is the definition of security policies.
|
|
Security policies are sets of rules that describe wanted or unwanted behavior.
|
|
These rules are built on input data accessible to the \gls{ids} such as user activity \cite{ilgun1995state} or network traffic \cite{5563714, kumar2020integrated}.
|
|
However, the input data must be labeled before a rule can be applied.
|
|
This illustrates the gap between the side-channel analysis methods and the rule-based intrusion detection methods.
|
|
To apply security policies to side-channel information, it is necessary to first label the data.
|
|
|
|
The problem of identifying pre-defined patterns in unlabeled time series is referenced under various names in the literature.
|
|
The terms \textit{activity segmentation} or \textit{activity detection} are the most relevant for the problem we are interested in.
|
|
The state-of-the-art methods in this domain focus on human activities and leverage various sensors such as smartphones \cite{wannenburg2016physical}, cameras \cite{bodor2003vision} or wearable sensors \cite{uddin2018activity}.
|
|
These methods rely on large labeled datasets to train classification models and detect activities \cite{micucci2017unimib}.
|
|
For real-life applications, access to large labeled datasets may not be possible.
|
|
Another approach, more general than activity detection, uses \gls{cpd}.
|
|
\gls{cpd} is a sub-topic of time series analysis that focuses on detecting abrupt changes in a time series \cite{truong2020selective}.
|
|
It is assumed in many cases that these change points are representative of state transitions from the observed system.
|
|
However, \gls{cpd} is only the first step in state detection as classification of the detected segments remains necessary \cite{aminikhanghahi2017survey}.
|
|
Moreover, not all state transitions trigger abrupt changes in the time series statistics, and some states themselves include abrupt changes.
|
|
Overall, \gls{cpd} only fits a specific type of problem with stable states and abrupt transitions.
|
|
Neural networks rose in popularity for time series analysis with the advent of \gls{rnn}s.
|
|
Large \gls{cnn} can perform pattern extraction in long time series, for example, in the context of \gls{nilm} \cite{8598355}.
|
|
\gls{nilm} focuses on the problem of signal disaggregation.
|
|
In this problem, the signal comprises an aggregate of multiple signals, each with their own patterns \cite{angelis2022nilm}.
|
|
This problem shares many terms and core techniques with this paper, but the nature of the input data makes \gls{nilm} a distinct area of research.
|
|
|
|
The specific problem of classification with only one example of each class is called one-shot --- or few-shot --- classification.
|
|
This topic focuses on pre-extracted time series classification with few training samples, often using multi-level neural networks \cite{10.1145/3371158.3371162, 9647357}.
|
|
However, in the context of side-channel analysis, a time series contains many patterns that are not extracted.
|
|
Moreover, neural-based approaches lack interpretability, which can cause issues in the case of unforeseen time series patterns.
|
|
Simpler approaches with novelty detection capabilities are required when the output serves as input for rule-based processing.
|
|
|
|
Finally, Duin et al. investigate the problem of distance-based few-shot classification \cite{duin1997experiments}.
|
|
They present an approach based on the similarity between new objects and a dissimilarity matrix between items of the training set.
|
|
The similarities are evaluated with Nearest-Neighbor rules or \gls{svm}.
|
|
Their approach bears some interesting similarities with the one presented in this paper.
|
|
However, they evaluate their work on the recognition of handwritten numerals, which is far from the use case we are interested in.
|
|
|
|
\section{Problem Statement}\label{sec:statement}
|
|
%\gls{mad} focuses on detecting the state of a time series at any point in time.
|
|
We treat the problem as a multi-class, mono-label classification problem \cite{aly2005survey} for every sample in a time series.
The problem is multi-class because multiple states can occur in one time series, and therefore any sample is assigned one of multiple states.
|
|
The problem is mono-label because only one state is assigned to each sample.
|
|
The classification is a mapping from the sample space to the state space.
|
|
|
|
\begin{problem-statement}[\gls{mad}]
|
|
Given a discretized time series $t$ and a set of patterns $P=\{P_1,\dots, P_n\}$, identify a mapping $m:\mathbb{N}\longrightarrow P\cup \{\lambda\}$ such that every sample $t[i]$
maps to a pattern in $P\cup \{\lambda\}$ with the condition that the sample matches an occurrence of the pattern in $t$.
|
|
\end{problem-statement}
|
|
|
|
The time series $t: \mathbb{N} \longrightarrow \mathbb{R}$ is a finite, discretized, mono-variate, real-valued time series.
|
|
The patterns (also called training samples) $P_j \in P$ are of the same type as $t$.
|
|
Each pattern $P_j$ can take any length denoted $N_j$.
|
|
A sample $t[i]$ \textit{matches} a pattern $P_j \in P$ if there exists a substring of $t$ of length $N_j$ that includes the sample, such that a similarity measure between this substring and $P_j$ is below a pre-defined threshold.
|
|
The pattern $\lambda$ is the \textit{unknown} pattern assigned to the samples in $t$ that do not match any of the patterns in $P$.
|
|
|
|
\begin{figure}
|
|
\centering
|
|
\includegraphics[width=0.45\textwidth]{images/overview.pdf}
|
|
\caption{Illustration of the sample distance from one sample to each training example in a 2D space.}
|
|
\label{fig:overview}
|
|
\end{figure}
|
|
|
|
\section{Proposed Solution: MAD}\label{sec:solution}
|
|
\gls{mad}'s core idea separates it from other traditional sliding window algorithms.
|
|
In \gls{mad}, the window around the sample to classify adapts dynamically for optimal context selection.
|
|
This principle influences the design of the detector and requires the definition of new distance metrics.
|
|
Because the lengths of the patterns may differ, our approach requires distance metrics robust to length variations.
|
|
%For the following explanation, the pattern set $P$ refers to the provided patterns only $\{P\setminus \lambda\}$ --- unless specified otherwise.
|
|
We first define the fundamental distance metric as the normalized Euclidean distance between two time series $a$ and $b$ of the same length $N_a=N_b$:
|
|
\begin{equation}
|
|
nd(a,b) = \dfrac{EuclideanDist(a,b)}{N_a}
|
|
\end{equation}
|
|
|
|
Using this normalized distance $nd$, we define the distance from a sample $t[i]$ to a pattern $P_j \in P$.
|
|
This is the sample distance $sd$ defined as
|
|
\begin{equation}\label{eq:sd}
|
|
sd(i,P_j) = \min_{k\in [i-N_j+1,\,i]}\left(nd(t[k:k+N_j],P_j)\right)
|
|
\end{equation}
|
|
|
|
%with $P_j$ the training sample corresponding to the state $j$, and $t$ the complete time series.
|
|
Computing the distance $sd(i,P_j)$ requires three steps: (1) selecting every substring of $t$ of length $N_j$ that contains the sample $t[i]$, (2) evaluating their normalized distance to the pattern $P_j$, and (3) taking $sd(i,P_j)$ as the smallest of these distances.
|
|
For simplicity, Equation~\ref{eq:sd} omits the border conditions for the range of $k$.
|
|
When the sample position $i$ is less than $N_j$ or greater than $N_t-N_j$, the range adapts to only consider valid substrings.
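For illustration, a minimal Python/NumPy sketch of these two distances follows; the function names \texttt{nd} and \texttt{sd} simply mirror the notation above and do not refer to a released implementation.
\begin{verbatim}
import numpy as np

def nd(a, b):
    # Normalized Euclidean distance between two series
    # of equal length.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.linalg.norm(a - b) / len(a)

def sd(t, i, pattern):
    # Sample distance: smallest normalized distance between
    # the pattern and any substring of t of the pattern's
    # length that contains t[i].
    n_j, n_t = len(pattern), len(t)
    start_min = max(0, i - n_j + 1)   # border condition (start)
    start_max = min(i, n_t - n_j)     # border condition (end)
    return min(nd(t[k:k + n_j], pattern)
               for k in range(start_min, start_max + 1))
\end{verbatim}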
|
|
|
|
Our approach uses a threshold-based method to decide what label to assign to a sample.
|
|
For each sample in $t$, the algorithm compares the distance $sd(i,P_j)$ to the threshold $T_j$.
|
|
The sample receives the label $j$ associated with the pattern $P_j$ that results in the smallest distance $sd(i,P_j)$ with $sd(i,P_j)<T_j$.
|
|
|
|
The minimum distance from the pattern $P_j$ to all other patterns $P_l$ with $l\neq j$ --- denoted $ID_j$ --- forms the basis of the threshold $T_j$.
|
|
Intuitively, the patterns in $P$ represent most of the patterns expected in the trace.
|
|
Thus, to decide that a substring matches a pattern $P_j$, it must match $P_j$ better than any other pattern $P_l$ with $l\neq j$ does.
|
|
Otherwise, the algorithm would assign the substring to $P_j$ even though the training pattern of another class matches $P_j$ better than the substring does, which is counter-intuitive.
|
|
The inter-distance between $P_j$ and $P_l$, defined as
|
|
\begin{equation}
|
|
ID(P_j,P_l) = \min_{i\in[0,N_l-N_j]} nd(P_j,P_l[i:i+N_j])
|
|
\end{equation}
|
|
represents the smallest distance between $P_j$ and any substring of length $N_j$ from $P_l$ --- with $N_l>N_j$.
|
|
If $N_l<N_j$, then $ID(P_j,P_l) = ID(P_l,P_j)$.
|
|
When computing the inter-distance between two patterns, we slide the short pattern along the length of the long one and compute the normalized distance at every position to finally consider only the smallest of these distances as the inter-distance.
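A minimal sketch of this inter-distance, reusing \texttt{nd} from the sketch above, could read as follows.
\begin{verbatim}
def inter_distance(p_a, p_b):
    # Smallest normalized distance between the shorter pattern
    # and any window of the same length slid along the longer one.
    short, long_ = (p_a, p_b) if len(p_a) <= len(p_b) else (p_b, p_a)
    n = len(short)
    return min(nd(long_[k:k + n], short)
               for k in range(len(long_) - n + 1))
\end{verbatim}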
|
|
|
|
To fully define the threshold $T_j$, we introduce the shrinkage coefficient $\alpha$.
|
|
This coefficient, multiplied with the smallest inter-distance $ID_j$, forms the threshold $T_j$.
|
|
\begin{equation}
|
|
T_j = \alpha\times ID_j = \alpha \min_{l\in[1,n],\, l\neq j} \{ID(P_j,P_l)\}
|
|
\end{equation}
|
|
The shrinkage coefficient $\alpha$ provides some control over the confidence of the detector.
|
|
A small value shrinks the range of capture of each label more and will leave more samples classified as \textit{unknown}.
|
|
A large value leaves less area for the \textit{unknown} state and forces the detector to choose a label, even for samples unlike any pattern.
|
|
The \textit{unknown} label enables the detector to carry over the information of novelty to the output.
|
|
In cases where a substring does not resemble any pattern --- for example, in cases of anomalies or unforeseen activities ---, the ability to inform of novel patterns enables a more granular definition of security policies.
|
|
|
|
Finally, we assign to each sample the label of the closest pattern with a distance lower than its threshold.
|
|
\begin{equation}
|
|
s_i = \underset{j\in[1,n]}{\arg\min}\left(sd(i,P_j) \textrm{ with } sd(i,P_j)<T_j\right)
|
|
\end{equation}
|
|
In the case where no distance is below the threshold, the sample defaults to the \textit{unknown} state.
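The threshold computation and the per-sample labeling can then be sketched directly from these definitions, reusing \texttt{sd} and \texttt{inter\_distance} from the sketches above; this is the naive per-sample formulation, not the optimized implementation described in the next subsection.
\begin{verbatim}
UNKNOWN = -1

def thresholds(patterns, alpha=1.0):
    # T_j = alpha * smallest inter-distance from pattern j
    # to any other pattern.
    return [alpha * min(inter_distance(p_j, p_l)
                        for l, p_l in enumerate(patterns) if l != j)
            for j, p_j in enumerate(patterns)]

def label_sample(t, i, patterns, T):
    # Assign the closest pattern whose sample distance is below
    # its threshold, or the unknown label when no pattern qualifies.
    best, best_dist = UNKNOWN, float("inf")
    for j, p in enumerate(patterns):
        d = sd(t, i, p)
        if d < T[j] and d < best_dist:
            best, best_dist = j, d
    return best
\end{verbatim}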
|
|
|
|
|
|
\subsection{Algorithm}
|
|
The algorithm for \gls{mad} follows three steps:
|
|
|
|
\begin{enumerate}
|
|
\item Compute the inter-distances and threshold values for the pattern set. The algorithm can reuse the result from this step for all following detection with the same pattern set.
|
|
\item For each sample $t[i]$, compute the sample distance to each pattern $\{sd(i,p) \forall p\in P\}$.
|
|
\item Select the label by comparing the sample distances to the threshold.
|
|
\end{enumerate}
|
|
|
|
However, directly implementing this sequence of operations is not optimal, as it computes the distance from each substring to each pattern multiple times --- exactly once per sample in the substring.
|
|
A more efficient solution considers each substring only once.
|
|
Iterating over the patterns rather than the samples is more efficient as it replaces distance computations with comparison operations.
|
|
The efficient implementation follows the operations:
|
|
|
|
\begin{enumerate}
|
|
\item Compute the inter-distances and threshold values for the pattern set --- no optimization at this step.
|
|
\item For every pattern $P_j$ of length $N_j$ in $P$, consider every substring of length $N_j$ in $t$ and compute the normalized distance $nd(t[i:i+N_j],P_j)$.
|
|
\item For every sample in the substring, store the minimum of the previously stored and newly computed normalized distance as the sample distance.
|
|
\item Select the label by comparing the sample distances to the thresholds.
|
|
\end{enumerate}
|
|
This results in the same final value for the sample distance $sd(i,P_j)$ with fewer computations of the normalized distance --- at the cost of additional, cheaper comparison operations.
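A minimal Python sketch of this pattern-major pass follows, reusing \texttt{nd} and \texttt{thresholds} from the earlier sketches; it is illustrative only and omits any further optimization.
\begin{verbatim}
import numpy as np

def detect(t, patterns, alpha=1.0):
    # Efficient pass: visit each window once per pattern, then
    # spread the window distance to every sample the window covers.
    t = np.asarray(t, dtype=float)
    n, n_t = len(patterns), len(t)
    T = thresholds(patterns, alpha)
    dist = np.full((n, n_t), np.inf)
    for j, p in enumerate(patterns):
        n_j = len(p)
        for k in range(n_t - n_j + 1):
            d = nd(t[k:k + n_j], p)
            dist[j, k:k + n_j] = np.minimum(dist[j, k:k + n_j], d)
    labels = np.full(n_t, -1)       # -1 encodes the unknown label
    for i in range(n_t):
        below = [j for j in range(n) if dist[j, i] < T[j]]
        if below:
            labels[i] = min(below, key=lambda j: dist[j, i])
    return labels
\end{verbatim}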
|
|
Algorithm~\ref{alg:code} presents the implementation's pseudo-code.
|
|
|
|
\begin{algorithm}
|
|
\caption{Pseudo code for state detection.}
|
|
\label{alg:code}
|
|
\begin{algorithmic}[1]
|
|
\Require $t$ the time series of length $N_t$, $P$ the set of $n$ patterns, $\alpha$ the shrinkage coefficient.
|
|
\BeginBox
|
|
\LComment{First part: computation of the thresholds.}
|
|
\State $interDistances \gets nilMatrix(n,n)$
|
|
\State $thresholds \gets nilList(n)$
|
|
\For{$i \in [0,n-1]$}
|
|
\For{$j \in [0,n-1]$}
|
|
\If{$i\neq j$ and $interDistances[i,j] = Nil$}
\State $dist \gets ID(P[i],P[j])$
|
|
\State $interDistances[i,j] \gets dist$
|
|
\State $interDistances[j,i] \gets dist$
|
|
\EndIf
|
|
\EndFor
|
|
\State $thresholds[i] \gets \alpha \times min(interDistances[i,:])$
|
|
\EndFor
|
|
\EndBox
|
|
|
|
\BeginBox
|
|
\LComment{Second part: computation of the distances.}
|
|
\State $distances \gets nilMatrix(n,N_t)$
|
|
\State $labels \gets nilList(N_t)$
|
|
|
|
\For{$i \in [0,n-1]$}
|
|
\For{$k \in [0,N_t-N_{P_i}]$}
\State $dist \gets nd(t[k:k+N_{P_i}], P_i)$
\For{$l\in [k,k+N_{P_i}-1]$}
\State $distances[i,l] \gets min(distances[i,l], dist)$
|
|
\EndFor
|
|
\EndFor
|
|
\EndFor
|
|
\EndBox
|
|
|
|
\BeginBox
|
|
\LComment{Third part: selection of the label based on the distances.}
|
|
\For{$k \in [0,N_t-1]$}
|
|
\State $rowMin \gets Nil$
|
|
\State $distanceMin \gets \infty$
|
|
\For{$i \in [0,n-1]$}
|
|
\If{$distances[i,k] \leq thresholds[i]$ and $distances[i,k] < distanceMin$}
|
|
\State $rowMin \gets i$
|
|
\State $distanceMin \gets distances[i,k]$
|
|
\EndIf
|
|
\EndFor
|
|
\State $labels[k] \gets rowMin$
|
|
\EndFor
|
|
\EndBox
|
|
|
|
\State \Return $labels$
|
|
|
|
\end{algorithmic}
|
|
\end{algorithm}
|
|
|
|
|
|
\subsection{Analysis}
|
|
|
|
\textbf{Time-Efficiency:}
|
|
The time efficiency of the algorithm is expressed as a function of the number of normalized distance computations and the number of comparison operations.
|
|
Each part of the algorithm has its own time-efficiency expression, with Algorithm~\ref{alg:code} showing each of the three parts.
|
|
The first part, dedicated to the threshold computation, is quadratic in the number of patterns and linear in the length of each pattern.
|
|
The second part, in charge of computing the distances, is linear in the number of patterns, the length of the time series, and the length of each pattern.
|
|
Finally, the third part, focusing on the final label selection, is linear in both the length of the time series and the number of patterns.
|
|
Overall, the actual detection computation --- second and third parts --- is linear in all input sizes.
|
|
Adding an additional value to the time series triggers the computation of one more distance value per pattern, hence the linear relationship.
|
|
Similarly, lengthening a pattern by one triggers one more comparison operation for each substring of the time series, hence the linear relationship.
|
|
In conclusion, the additional operations introduced by \gls{mad} over a traditional \gls{1nn} do not significantly impact the time efficiency of the detection, which remains linear.
|
|
|
|
\textbf{Termination:}
|
|
Every part of the algorithm terminates.
|
|
The first part iterates on the patterns with two nested loops over the samples of two patterns.
|
|
No instruction modifies the patterns that are all of finite lengths.
|
|
Thus the loops always terminate.
|
|
The second part iterates over the patterns and the time series with two nested loops.
|
|
Similarly to the first part, the time series is finite and never altered.
|
|
Thus the second part also terminates.
|
|
Finally, the third part uses the same loops as the second and also terminates.
|
|
Overall, \gls{mad} always terminates for any finite time series and finite set of finite patterns.
|
|
|
|
\textbf{Influence of $\alpha$: }
|
|
The shrinkage coefficient $\alpha$ is the only hyperparameter of the detector.
Its default value is one.
$\alpha$ controls the similarity threshold that a substring must cross to qualify as a match to a pattern.
$\alpha$ takes its value in $\mathbb{R}_*^+$.
The default value of one follows the intuitive reasoning presented in Section~\ref{sec:solution}.
|
|
|
|
To better understand the influence of the shrinkage coefficient, the algorithm can be viewed as a 2D area segmentation problem.
|
|
Let us consider the 2D plane where each pattern has a position based on its shape (see Figure~\ref{fig:overview}).
|
|
A substring to classify also has a position in the plane and a distance to each pattern.
|
|
During classification, the substring takes the label of the closest pattern.
|
|
For any pattern $P_j$, the set of positions in the plane that is assigned to $P_j$ --- i.e., the set of positions for which $P_j$ is the closest pattern --- is called the area of attraction of $P_j$.
|
|
In a classic \gls{1nn} context, every point in the plane is in the area of attraction of one pattern.
|
|
|
|
This infinite area of attraction is not a desirable feature in this context.
|
|
Let us now consider a time series exhibiting anomalous or unforeseen behavior.
|
|
Some substrings in this time series do not resemble any of the provided patterns.
|
|
In an infinite area of attraction context, the anomalous points are assigned to a pattern, even if they poorly match it.
|
|
As a result, the behavior of the security rule can become unpredictable as anomalous points can receive a seemingly random label.
|
|
|
|
A more desirable behavior of the state detection system is to inform of the presence of unpredicted behavior.
|
|
This behavior naturally emerges when the areas of attraction of the patterns are limited to a finite size.
|
|
The shrinkage coefficient $\alpha$ --- through the modification of the thresholds $T_j$ --- provides control over the shrinking of the areas of attraction.
The lower the value of $\alpha$, the smaller the area of attraction around each pattern.
|
|
Applying a coefficient to the thresholds produces a reduction of the radius of the area of attraction, not a homothety of the initial areas.
|
|
The shrinkage does not preserve the shape of the area.
|
|
For a value $\alpha < 0.5$, all areas become disks --- in the 2D representation --- and all shape information is lost.
|
|
Figure~\ref{fig:areas} illustrates the areas of capture around the patterns for different values of $\alpha$.
|
|
|
|
\begin{figure}
|
|
\centering
|
|
\includegraphics[width=0.49\textwidth]{images/areas.pdf}
|
|
\caption{2D visualization of the areas of capture around each pattern as $\alpha$ changes. When $\alpha \ggg 2$, the areas of capture tend to equal those of a classic \gls{1nn}.}
|
|
\label{fig:areas}
|
|
\end{figure}
|
|
|
|
|
|
The influence of the $\alpha$ coefficient on the classification is monotonic and predictable.
|
|
Because $\alpha$ influences the thresholds, changing $\alpha$ results in moving the transitions in the detected labels.
|
|
A lower value of $\alpha$ expands the unknown segments while a higher value shrinks them until they disappear.
|
|
Figure~\ref{fig:alpha_impact} illustrates the influence that $\alpha$ has on the width of unknown segments.
|
|
The impact of $\alpha$ on the number of unknown samples is also monotonic.
|
|
|
|
\begin{proof}
|
|
We prove the monotonicity of the number of unknown samples as a function of $\alpha$ by induction.
|
|
The base case is $\alpha=0$.
|
|
In this case, the threshold for every pattern $P_j\in P$ is $T_j = \alpha\times ID_j = 0$.
|
|
With every $T_j=0$, no sample can have a distance below the threshold and every sample is labeled as \textit{unknown}.
|
|
|
|
For the induction case, let us consider $\alpha$ increasing from the value $\alpha_0$ to $\alpha_1 = \alpha_0 + \delta$ with $\delta \in \mathbb{R}_*^+$.
|
|
Increasing $\alpha$ increases every threshold $T$ from the value $T_0$ to $T_1$:
|
|
\begin{equation}
|
|
\alpha_0 <\alpha_1 \rightarrow T_0 < T_1
|
|
\end{equation}
|
|
|
|
For every value of every threshold $T$ we can define a set of all samples below the threshold as $S_T$.
|
|
When a threshold increases from $T_0$ to $T_1$, all the samples in $S_{T_0}$ also belong in $S_{T_1}$ by the transitivity of order in $\mathbb{R}_*^+$.
|
|
It is also possible for samples to belong to $S_{T_1}$ but not to $S_{T_0}$ if their distance falls between $T_0$ and $T_1$.
|
|
Hence, $S_{T_0}$ is a subset of $S_{T_1}$ and the cardinality of $S_T$ as a function of $T$ is monotonically non-decreasing.
|
|
|
|
We conclude that the number of unknown samples as a function of $\alpha$ is monotonically non-increasing.
|
|
\end{proof}
|
|
|
|
|
|
Figure~\ref{fig:alpha} presents the number of unknown samples in the classification of the NUCPC-1 time series based on the value of $\alpha$.
|
|
|
|
\begin{figure}
|
|
\centering
|
|
\includegraphics[width=0.49\textwidth]{images/alpha.pdf}
|
|
\caption{Evolution of the number of unknown samples based on the value of the shrink coefficient $\alpha$.}
|
|
\label{fig:alpha}
|
|
\end{figure}
|
|
|
|
|
|
\begin{figure}
|
|
\centering
|
|
\includegraphics[width=0.49\textwidth]{images/alpha_impact.pdf}
|
|
\caption{Behavior of the classifier with different values of $\alpha$. A lower value of $\alpha$ expands the unknown sections (orange sections).}
|
|
\label{fig:alpha_impact}
|
|
\end{figure}
|
|
|
|
|
|
\section{Case Study 1: Comparison with Other Methods}\label{sec:cs1}
|
|
The first evaluation of \gls{mad} consists of detecting the states in time series from various machines.
We evaluate the performance of the proposed solution against traditional methods to illustrate the capabilities and advantages of \gls{mad}.
|
|
|
|
\subsection{Performance Metrics}
|
|
We considered two metrics to illustrate the performance of \gls{mad}.
|
|
Performance evaluations of labeling systems traditionally use accuracy \cite{grandini2020metrics}.
|
|
Accuracy is defined as the number of correctly classified samples divided by the total number of samples.
|
|
However, accuracy only illustrates part of the performance.
|
|
In the context of state detection, we are interested in taking actions depending on the state of a system.
|
|
Detecting the start and stop times of each state is not as important as detecting the correct list of occurrences of states.
|
|
We are interested in making sure that the state is detected, even at the cost of some time inaccuracy.
|
|
The Levenshtein distance~\cite{4160958} illustrates the classifier's performance at detecting the correct list of states from a time series.
|
|
The Levenshtein distance is defined as the number of single-character edits --- insertions, deletions or substitutions --- between two strings.
|
|
The Levenshtein distance could use the raw detected labels list as input.
|
|
However, the raw label list embeds state detection time information, which the Levenshtein distance is very sensitive to.
|
|
We first reduce the ground truth and the detected labels by removing immediate duplicates of labels.
|
|
This reduction removes timing information yet conserves the global order of state occurrences.
|
|
The Levenshtein distance between the ground truth and the detected labels is low if every state occurrence is correctly detected.
|
|
Similarly, the metric is high if occurrences are missed, added, or misdetected.
|
|
To remove length bias and make the metric comparable across datasets, we normalize the raw Levenshtein distance and define it as
|
|
\begin{equation}
|
|
levacc = \dfrac{Levenshtein(rgtruth,rlabels)}{max(rN_t,rN_l)}
|
|
\end{equation}
|
|
with $rgtruth$ and $rlabels$ respectively the reduced ground truth and reduced labels, and $rN_t$ and $rN_l$ their respective lengths.
|
|
The Levenshtein distance provides complementary insights on the quality of the detection in this specific use case.
|
|
Figure~\ref{fig:metrics} illustrates the impact of an error on both metrics.
|
|
It is important to notice that zero represents the best Levenshtein distance and one the worst --- contrary to the accuracy.
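For illustration, the reduction and the normalized Levenshtein metric can be sketched as follows; the helper names are ours.
\begin{verbatim}
def reduce_labels(labels):
    # Collapse immediate repetitions: [0,0,1,1,1,0] -> [0,1,0].
    return [l for i, l in enumerate(labels)
            if i == 0 or l != labels[i - 1]]

def levenshtein(a, b):
    # Classic dynamic-programming edit distance between sequences.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def levacc(ground_truth, predicted):
    rg, rl = reduce_labels(ground_truth), reduce_labels(predicted)
    return levenshtein(rg, rl) / max(len(rg), len(rl))
\end{verbatim}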
|
|
|
|
\begin{figure}
|
|
\centering
|
|
\includegraphics[width=0.49\textwidth]{images/metric.pdf}
|
|
\caption{Neither accuracy nor the Levenshtein distance alone can illustrate all types of error. We consider both to provide a better evaluation of the performance.}
|
|
\label{fig:metrics}
|
|
\end{figure}
|
|
|
|
|
|
\subsection{Dataset}\label{sec:dataset}
|
|
We evaluate the performance of \gls{mad} on seven time series.
|
|
One is a simulated signal composed of sine waves of varying frequency and average.
|
|
Four were captured in a lab environment on consumer-available machines (two NUC PCs and two wireless routers).
|
|
Finally, two were extracted from the REFIT dataset \cite{278e1df91d22494f9be2adfca2559f92} and correspond to home appliances during real-life use.
|
|
Table~\ref{tab:dataset} presents the time series and their characteristics.
|
|
|
|
\begin{table}
|
|
\centering
|
|
\caption{Characteristics of the time series in the evaluation dataset.}
|
|
\begin{tabular}{lcc}
|
|
Name & Length & Number of states\\
|
|
\toprule
|
|
NUCPC-0 & 22700 & 11\\
|
|
NUCPC-1 & 7307 & 8\\
|
|
GENERATED & 15540 & 18\\
|
|
WAP-ASUS & 26880 & 18\\
|
|
WAP-LINKSYS & 22604 & 18\\
|
|
REFIT-H4A4 & 5366 & 17\\
|
|
REFIT-H4A1 & 100000 & 142\\
|
|
\bottomrule
|
|
\end{tabular}
|
|
\label{tab:dataset}
|
|
\end{table}
|
|
|
|
The dataset aims to provide diverse machine and state patterns to evaluate the performance.
|
|
For each time series, we generated the ground truth by manually labeling all sections of the time series using a custom-made range selection tool based on a Matplotlib \cite{Hunter:2007} application.
|
|
The dataset is publicly available \cite{zenodo}.
|
|
|
|
\textbf{Lab Captures:}
|
|
NUCPC-0, NUCPC-1, WAP-ASUS and WAP-LINKSYS correspond to lab-captured machine activity power consumption.
|
|
A commercial solution \cite{hidden-palitronica}, placed in series with the main power cable, measures the global power consumption of the machine.
|
|
We considered two types of machines.
|
|
The NUCPC-* are small form factor general-purpose computers.
|
|
The WAP-* are wireless access points from two different brands.
|
|
The states to detect on these computing machines are \textit{powered off}, \textit{boot sequence}, and \textit{on}.
|
|
With these states, it is possible to set up many security rules such as: \textit{"machine on after office hours"}, \textit{"X reboots in a row"} or \textit{"Coincident shutdown of Y machines within Z minutes"}.
|
|
|
|
\textbf{GENERATED:}
|
|
An algorithm generated the GENERATED time series following three steps.
|
|
First, the algorithm randomly selects multiple frequency/average pairs.
|
|
Second, the algorithm generates 18 segments by selecting a pair and a random length.
|
|
Finally, the algorithm concatenates the segments to form the complete time series.
|
|
The patterns correspond to a minimal length example of each pair.
|
|
This time series illustrates the capabilities of the proposed solution in a case where a simple threshold would fail.
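A minimal sketch of such a generator follows; the frequency range, average range, and segment lengths are placeholders, as the exact values used for GENERATED are not listed here.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def generate(n_pairs=6, n_segments=18, fs=100):
    # Draw random frequency/average pairs, then concatenate
    # segments, each using one pair and a random length.
    pairs = [(rng.uniform(0.5, 5.0), rng.uniform(0.0, 10.0))
             for _ in range(n_pairs)]
    segments, labels = [], []
    for _ in range(n_segments):
        state = int(rng.integers(n_pairs))
        freq, avg = pairs[state]
        length = int(rng.integers(5 * fs, 15 * fs))  # random duration
        x = np.arange(length) / fs
        segments.append(avg + np.sin(2 * np.pi * freq * x))
        labels.append(np.full(length, state))
    return np.concatenate(segments), np.concatenate(labels)
\end{verbatim}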
|
|
|
|
\textbf{REFIT:}
|
|
In 2015, D. Murray et al. \cite{278e1df91d22494f9be2adfca2559f92} created the REFIT dataset for \gls{nilm} research.
|
|
This dataset is now widely used in this research area.
|
|
REFIT is composed of the global consumption of 20 houses, along with the specific consumption of nine appliances per house.
|
|
The global house consumption does not fit the problem statement of this paper as multiple patterns overlap.
|
|
However, the individual consumption of some appliances fits the problem statement, and two were selected.
|
|
The REFIT-H4A1 is the first appliance of the fourth house and corresponds to a fridge.
|
|
The REFIT-H4A4 is the fourth appliance of the fourth house and corresponds to a washing machine.
|
|
The activity in this second time series was sparse with long periods without consumption.
|
|
The no-consumption sections are not challenging --- i.e., all detectors perform well on this type of pattern --- yet they make the manual labeling more difficult and inflate all results.
For this reason, we removed large sections of inactivity between active segments to make the time series more challenging without altering the ranking of the detectors.
|
|
|
|
\subsection{Alternative Methods}
|
|
We implemented three alternative methods to compare with \gls{mad}.
|
|
The alternative methods are chosen to be well-established and of comparable complexity.
|
|
The methods are: a \gls{1nn} detector, an \gls{svm} classifier, and an \gls{mlp} classifier.
|
|
More complex solutions like \gls{rnn} or \gls{cnn} show good performance on time series analysis but require too much data to be applicable to one-shot classification.
|
|
All alternative methods rely on a sliding window to extract substrings to classify.
The window is centered around the sample.
|
|
This choice --- or any other placement of the window --- implies that a number of samples on the order of the longest pattern's length remain unclassified at the ends of the time series.
|
|
The stride of the window is a single sample to consider every possible window.
|
|
Each extracted window is sent to the classifier, and the result is applied to the sample at the center of the window.
|
|
The alternative detectors are not designed to handle variable-length inputs.
|
|
For the \gls{svm} and \gls{mlp} detectors, the window size is shorter than the shortest pattern.
|
|
The training sample extraction algorithm slides the window along all patterns to extract all possible substrings.
|
|
These substrings constitute the training dataset with multiple samples per pattern.
|
|
The \gls{mlp} is implemented using Keras~\cite{keras} and composed of a single layer with 100 neurons.
The number of neurons was chosen after evaluating the accuracy of the \gls{mlp} on one of the datasets (NUCPC-1) with varying numbers of neurons.
|
|
Similarly, the \gls{svm} detector is implemented using scikit-learn~\cite{sklearn} with the default parameters.
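The following sketch illustrates the window extraction and the training of the \gls{svm} detector with scikit-learn's default parameters; the placeholder patterns and the window choice are illustrative only.
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC

# Placeholder patterns: one short series per state
# (hypothetical shapes, for illustration only).
patterns = [np.sin(np.linspace(0, 4 * np.pi, 40)),
            np.ones(60), np.zeros(50)]

def extract_training_set(patterns, window):
    # Slide a fixed-length window along every pattern; each
    # substring becomes one training sample of that class.
    X, y = [], []
    for label, p in enumerate(patterns):
        for k in range(len(p) - window + 1):
            X.append(p[k:k + window])
            y.append(label)
    return np.array(X), np.array(y)

# Window shorter than the shortest pattern, as described above.
window = min(len(p) for p in patterns) - 1
X, y = extract_training_set(patterns, window)
clf = SVC().fit(X, y)   # scikit-learn SVM, default parameters
\end{verbatim}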
|
|
The \gls{1nn} considers one window per pattern length around each sample.
|
|
Every window is compared to its pattern, and the normalized Euclidean distance is considered for the decision.
|
|
Overall, it is possible to adapt the methods to work with variable-length patterns, but \gls{mad} is the only pattern-length-agnostic method by design.
|
|
|
|
\subsection{Results}\label{sec:results}
|
|
The benchmark consists of detecting the label of every sample in each time series with each method and computing the performance metrics.
|
|
The detectors that require training (\gls{svm} and \gls{mlp}) were re-trained for every evaluation.
|
|
Figure~\ref{fig:res} presents the results.
|
|
\gls{mad} is consistently as accurate as or more accurate than the alternative methods.
|
|
The Levenshtein distance illustrates how \gls{mad} provides smoother and less noisy labeling.
|
|
This stability introduces fewer state detection errors that could falsely trigger security rules.
|
|
With both performance metrics combined, \gls{mad} outperforms the other methods.
|
|
|
|
\begin{figure*}
|
|
\centering
|
|
\includegraphics[width=\textwidth]{images/dsd_acc.pdf}
|
|
\caption{Performance of the different methods on all the datasets.}
|
|
\label{fig:res}
|
|
\end{figure*}
|
|
|
|
|
|
\section{Case Study 2: Attack Scenarios}\label{sec:cs2}
|
|
The second case study focuses on a realistic production scenario.
|
|
This case study aims to illustrate how \gls{mad} enables rules at a high level of abstraction by converting the low-level power consumption signal into a labeled and actionable sequence of states.
|
|
|
|
|
|
\subsection{Overview}
|
|
This second case study aims to illustrate the performance of the \gls{mad} detector on more realistic data.
To this end, a machine was set up to perform tasks on a typical office work schedule composed of work hours, sleep hours, and maintenance hours.
|
|
The scenario comprises four phases:
|
|
|
|
\begin{itemize}
|
|
|
|
\item 1 Night Sleep: During the night and until the worker begins the day, the machine is asleep in the S3 sleep state~\cite{sleep_states}. Any state other than sleep is considered anomalous during this time.
|
|
\item 2 Work Hours: During work hours, little restriction is applied to the activity. Only a long period with the machine asleep is considered anomalous.
|
|
\item 3 Maintenance: During the night, the machine wakes up as part of an automated maintenance schedule. During maintenance, updates are fetched, and a reboot is performed.
|
|
\item 4 No Long High Load: At no point should there be a sustained high load on the machine. Given the scenario of classic office work, having all cores of a machine maxed out is suspicious. Violations of this rule are generated by running the program xmrig for more than 30 seconds. Xmrig is legitimate crypto-mining software, but it is commonly abused by criminals to build crypto-mining malware.
|
|
\end{itemize}
|
|
|
|
\begin{figure}
|
|
\centering
|
|
\includegraphics[width=0.49\textwidth]{images/2w_experiment.pdf}
|
|
\caption{Overview of the scenario and rules for the second case study. The rules are defined in Table~\ref{tab:rules}.}
|
|
\label{fig:2w_experiment}
|
|
\end{figure}
|
|
|
|
In order to reduce the experimentation and processing time, the daily scenario is compressed into 4 hours, allowing six runs per day and a processing time of only $\approx 4min$ per run.
|
|
Note that this compression of experiment time does not influence the results (the patterns are kept uncompressed) and is only for convenience and better confidence in the results.
|
|
Figure~\ref{fig:2w_experiment} illustrates the experiment scenario with both the real and compressed time.
|
|
|
|
The data capture follows the same setup as presented in the first case study.
|
|
A power measurement device is placed in series with the main power cable of the machine (a NUC micro-pc).
|
|
The measurement device captures the power consumption at 10 kilo-samples per second.
|
|
The pre-processing step downsamples the trace to 20 samples per second using a median filter.
|
|
This step greatly reduces the measurement noise and the processing time and increases the consistency of the results.
|
|
The final sampling rate of 20 samples per second was selected empirically so that the sampling period remains more than an order of magnitude shorter than the typical duration of the patterns to detect (around five seconds).
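A minimal sketch of this downsampling step follows; the block-median formulation is one straightforward reading of the median-filter description above.
\begin{verbatim}
import numpy as np

def downsample_median(trace, fs_in=10_000, fs_out=20):
    # Replace each block of fs_in // fs_out consecutive raw samples
    # with its median (here 500 raw samples per output sample).
    block = fs_in // fs_out
    usable = (len(trace) // block) * block
    blocks = np.asarray(trace[:usable], dtype=float).reshape(-1, block)
    return np.median(blocks, axis=1)
\end{verbatim}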
|
|
|
|
For each compressed day of the experiment (a four-hour segment, hereafter referred to as a day), \gls{mad} performs state detection and returns a label vector.
This label vector associates a label with each sample of the power trace following the mapping: $-1$ is UNKNOWN, 0 is SLEEP, 1 is IDLE, 2 is HIGH, and 3 is REBOOT.
|
|
The training dataset comprises one sample per state, captured during the run of a benchmark script that interactively places the machine in each state to detect.
|
|
The script on the machine generates logs that serve as ground truth to verify the results of rule checking.
|
|
The traces and ground truth for each day of the experiment are available online \cite{name_hidden_for_peer_review_2023_8192914}.
|
|
Please note that day 1 was removed due to a scheduling issue that affected the scenario.
|
|
Figure~\ref{fig:preds} presents an illustration of the results.
|
|
The main graph line in the middle is the power consumption over time.
|
|
The line colors represent the machine state predicted from the power consumption pattern.
Below the graph, two lines illustrate the label vectors.
The top line shows the predicted labels and can be interpreted as a projection of the power consumption line onto the x-axis.
The bottom line shows the ground-truth labels, generated from the scenario logs.
The figure already shows that the prediction is correct most of the time, except for some noise around state transitions and uncertainty between idle and generic activities (represented as UNKNOWN).
|
|
The errors at transitions are explained by the training samples, which focus on stable states and do not provide labels for transition patterns.
|
|
A simple solution to avoid this issue would be to provide training patterns for state transitions.
|
|
This type of error foreshadows the suitability of the method for rule verification, presented in more detail in Section~\ref{2wexp-results}.
|
|
|
|
\begin{figure}
|
|
\centering
|
|
\includegraphics[width=0.49\textwidth]{images/preds.pdf}
|
|
\caption{Label predictions from MAD for one (compressed) day of the scenario.}
|
|
\label{fig:preds}
|
|
\end{figure}
|
|
|
|
\subsection{Security Rules}
|
|
Many rules can be imagined to describe the expected and unwanted behavior of a machine.
|
|
System administrators can define sophisticated rules to detect specific attacks or to match the typical activities of their infrastructure.
|
|
We selected four rules (see Table~\ref{tab:rules}) that are representative of common threats to \gls{it} infrastructures.
|
|
These rules are not exhaustive and are merely an example of the potential of converting power consumption traces to actionable data.
|
|
The rules are formally defined using the \gls{stl} syntax, which is well suited to describing variable patterns with temporal components.
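As a simplified illustration of rule checking on the detected label vector, the following sketch tests Rule~4 (no HIGH state for more than 30 seconds) directly on the labels; it is a toy checker, not an \gls{stl} monitor.
\begin{verbatim}
HIGH = 2   # label value for the HIGH state (mapping above)

def violates_no_long_high(labels, fs=20, max_seconds=30):
    # Report a violation if a run of consecutive HIGH labels
    # lasts longer than max_seconds at the given sampling rate.
    limit = max_seconds * fs
    run = 0
    for label in labels:
        run = run + 1 if label == HIGH else 0
        if run > limit:
            return True
    return False
\end{verbatim}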
|
|
|
|
\begin{table*}
|
|
\centering
|
|
\caption{Security rules applied to the detected states of the machine. $s[t]$ represents the label at time $t$.}
|
|
\begin{tabular}{p{0.03\textwidth} | p{0.20\textwidth} | p{0.43\textwidth} | p{0.25\textwidth}}
|
|
Rule & Description & STL Formula & Threat\\
|
|
\toprule
|
|
1 & "SLEEP" state only & $R_1 := \square_{[0,1h]}(s[t]=0)$ & Machine takeover, Botnet\cite{mitre_botnet}, Rogue Employee\\
|
|
2 & No "SLEEP" for more than 8m. & $R_4 := \square_{[1h,2h40]} (s[t_0]=0 \rightarrow \lozenge_{[t_0,t_0+1h]}(s[t_0]=0))$ & System Malfunction\\
|
|
3 & Exactly one occurrence of "REBOOT" & $R_2 := \lozenge(s[t_0]=3) \cup (\neg \square_{[t_0,t_0+2h40]}(s[t]=3)$ & \gls{apt}\cite{mitre_prevent}, Backdoors\\
|
|
4 & No "HIGH" state for more than 30s. & $R_3 := \square (s[t_0]=2 \rightarrow \lozenge_{[t_0,t_0+30s]}(s[t]=2))$ & CryptoMining Malware \cite{mitre_crypto}, Ransomware\cite{mitre_ransomware}, BotNet\cite{mitre_botnet}\\
|
|
\bottomrule
|
|
\end{tabular}
|
|
\label{tab:rules}
|
|
\end{table*}
|
|
|
|
\subsection{Results}\label{2wexp-results}
|
|
The performance measure represents the ability of the complete pipeline (\gls{mad} and rule checking) to detect anomalous behavior.
|
|
The main metrics are the micro and macro $F_1$ score of the rule violation detection.
|
|
The macro-$F_1$ score is defined as the arithmetic mean over individual $F_1$ scores for a more robust evaluation of the global performance as described in \cite{opitz2021macro}.
|
|
Table~\ref{tab:rules-results} presents the performance for the detection of each rule.
|
|
The performance is perfect for this scenario, with no false positive or false negative over 40 runs.
|
|
|
|
The perfect detection of more complex patterns like REBOOT illustrates the need for a system capable of matching arbitrary states.
|
|
Many common states of embedded systems appear as flat lines at varying average levels.
|
|
If the only states to detect were OFF, ON, and HIGH, then a simple threshold method would suffice.
|
|
However, the REBOOT pattern is more complex.
|
|
The REBOOT resembles generic activities and crosses most of the same thresholds.
|
|
In order to consistently recognize it, the classifier must have, at its core, a pattern-matching mechanism.
|
|
This illustrates how \gls{mad} balances the tradeoff between being simple, explainable, and efficient on one side and capable, complete, and versatile on the other.
|
|
|
|
\begin{table}
|
|
\centering
|
|
\caption{Performance of the complete rule violation detection pipeline.}
|
|
\begin{tabular}{lccc}
|
|
Rule & Violation Ratio & Micro-$F_1$ & Macro-$F_1$\\
|
|
\toprule
|
|
Night Sleep & 0.33 & 1.0 & \multirow{4}*{1.0} \\
|
|
Work Hours & 0.3 & 1.0 & \\
|
|
Reboot & 0.48 & 1.0 & \\
|
|
No Long High & 0.75 & 1.0 & \\
|
|
\bottomrule
|
|
\end{tabular}
|
|
\label{tab:rules-results}
|
|
\end{table}
|
|
|
|
\section{Discussion}\label{sec:discussion}
|
|
In this section, we highlight specific aspects of the proposed solution.
|
|
|
|
\textbf{Dynamic Window vs Fixed Windows: }
|
|
One of the core mechanisms of \gls{mad} is the ability to choose the best-fitting window to classify each sample.
|
|
This mechanism is crucial to overcome some of the shortcomings of a traditional \gls{1nn}.
|
|
It is essential to understand the advantages of this dynamic window placement to fully appreciate the performance of \gls{mad}.
|
|
Figure~\ref{fig:proof} illustrates a test case that focuses on the comparison between the two methods.
|
|
In this figure, the top graph represents the near-perfect classification of the trace into different classes by \gls{mad}.
|
|
To make the results more comparable, the $\alpha$ parameter of \gls{mad} was set to $\infty$ to disable the distance threshold mechanism and focus on the dynamic window placement.
|
|
The middle graph represents the classification by a \gls{1nn}, and it illustrates the three types of errors that \gls{mad} aims to overcome.
|
|
The bottom graph represents the predicted state for each sample by each method, with $-1$ the UNKNOWN state and $0$ to $4$ the possible states of the trace.
|
|
|
|
\begin{itemize}
|
|
\item Transition Bleeding Error: Around transitions, \gls{1nn} tends to miss the exact transition timing and misclassify samples.
|
|
This is explained by the rigidity of the window around the sample.
|
|
At the transition time, the two halves of the window are competing to match different states.
|
|
Depending on the shape of the involved states, more than half of the window may be needed before the new state is preferred, leading to misclassified samples around the transition.
|
|
In contrast, \gls{mad} will always choose a window that fully matches either of the states, and that is not across the transition, avoiding the transition error.
|
|
\item Out-of-phase Error: When a state is described by multiple iterations of a periodic pattern, the match between a window and the trace varies dramatically every half-period. When the window is in phase with the pattern, the match is maximal and \gls{1nn} perfectly fills its role. However, when the window and the pattern are out of phase, the match is minimal, and the nearest neighbor may be a flat pattern at the average level of the pattern. This error manifests itself through predictions switching between two values at half the period of the pattern. \gls{mad} avoids this error by moving the window by, at most, half a period to ensure a perfect match with the periodic pattern.
|
|
\item Unknown-Edges Error: Because of the fixed nature of the window of a \gls{1nn}, every sample that is less than a half window away from either end cannot be classified. This error matters little in most cases, where edge samples are less important, and many solutions are available to address it. However, \gls{mad} naturally solves this issue by shifting the window only within the valid range up to the edge.
|
|
\end{itemize}
|
|
|
|
There are methods other than \gls{mad} to solve these issues, such as the \gls{dtw} distance metric, padding, or label post-processing.
However, this illustrates how \gls{mad} leverages dynamic window placement at its core to dramatically improve the accuracy of the classification.
|
|
Dynamic window placement is a simple mechanism that does not involve complex and computationally expensive distance metrics like \gls{dtw} to improve matches.
|
|
This leaves the choice of the distance metric open for specific applications.
|
|
The dynamic window placement also avoids increased complexity by requiring the same number of distance computations as \gls{1nn}.
|
|
|
|
\begin{figure*}
|
|
\centering
|
|
\includegraphics[width=0.9\textwidth]{images/proof.pdf}
|
|
\caption{Classification comparison between MAD and 1-NN with examples of prediction error from 1-NN highlighted. The top graph is \gls{mad}, the middle graph is 1-NN, and the bottom graph is the prediction vector of both methods.}
|
|
\label{fig:proof}
|
|
\end{figure*}
|
|
|
|
|
|
|
|
\textbf{Limitations: }
|
|
The proposed method has some limitations that are important to acknowledge.
|
|
The current version of \gls{mad} is tailored for a specific use case.
|
|
The goal is to enable high-level security policies with a secure and reliable state detection of a machine from a time series.
|
|
The purpose of the state detection is not an anomaly or novelty detection at the time series level.
|
|
For this reason, the patterns to be detected by \gls{mad} bear some limitations.
|
|
First, the patterns must be distinct.
|
|
If two patterns share a significant portion of their time series, \gls{mad} will struggle to distinguish them, leading to unstable results.
|
|
Second, the states must be hand selected.
|
|
The data requirement is extremely low --- only one sample per pattern --- so the selected samples must be reliable.
|
|
For now, a human expert decides on the best patterns to select.
While this selection is not complicated, it is still a highly manual process that we hope to automate in future iterations.
|
|
Finally, the states must be consistent.
|
|
If a state has an unpredictable signature --- i.e., each occurrence displays a significantly different pattern ---, \gls{mad} will not be able to detect the occurrences reliably.
|
|
If a state has different patterns, it is possible to capture each variation as a distinct training sample to enable better detection.
|
|
The proposed solution is trivial to adapt for multi-shot detection, but the design decisions and implementation details are outside the scope of this paper.
|
|
|
|
\textbf{Extension to Multi-shot Classification: }
|
|
\gls{mad} is not limited to one-shot cases and can leverage more labeled data.
|
|
\gls{mad} is based on a \gls{1nn}, so the evolution to \gls{knn} is natural.
|
|
If more than one pattern is available for a state, \gls{mad} applies the same detection method, only with multiple patterns leading to the same label.
|
|
The number of training samples per class can be unbalanced, and the training samples within a class can have different lengths.
|
|
\gls{mad} preserves the versatility of a \gls{knn} solution in this regard.
|
|
|
|
\textbf{Time Efficiency: }
|
|
\gls{mad} remains time-efficient compared to a classic \gls{1nn}.
|
|
Although there are more operations to perform to evaluate all possible windows around a sample, the impact on detection time is small.
|
|
Over all the datasets considered, the time for \gls{mad} was, on average, 14\% higher than the time for the \gls{1nn}.
|
|
\gls{mad} is also slower than \gls{svm} and faster than \gls{mlp}, but comparison to other methods is less relevant as computation time is highly sensitive to implementation, and no optimization was attempted.
|
|
Finally, because \gls{mad} is distance-based and window-based, parallelization is naturally applicable and can significantly reduce the processing time.
|
|
|
|
|
|
\section{Conclusion}
|
|
We present \gls{mad} and its associated rule-verification pipeline, a novel solution to enable high-level security policy enforcement from side-channel information.
Leveraging side-channel information requires labeling samples to discover the state of the monitored system.
|
|
Additionally, in the use cases where side channels are leveraged, collecting large labeled datasets can be challenging.
|
|
\gls{mad} is designed around three core features: low data requirement, flexibility of the detection capabilities, and stability of the results.
|
|
Built as a variation of a traditional \gls{1nn}, \gls{mad} uses a dynamic window placement that always provides the most relevant context for sample classification.
|
|
One hyper-parameter, $\alpha$, controls the confidence of the detector and the tradeoff between unclassified and misclassified samples.
|
|
The comparison to traditional state detection methods highlights the potential of \gls{mad} for the pre-processing of raw data for security applications.
|
|
|
|
\bibliographystyle{plain}
|
|
\bibliography{biblio}
|
|
|
|
\end{document}
|