Dynamic SAX

In this section, we assume that you are familiar with the SAX transformation principle. If you are not, please refer to this section.

Here we adapt the SAX transformation to our needs. As a reminder, our goal is to discretize newly captured data as soon as it arrives, so that we can perform online prediction.
To avoid storing an ever-growing amount of data as we capture it, we use a few tricks that we explain in the following part.

First of all, we use a sliding window of constant size $N$. We chose to normalize the data using this window's mean value and standard deviation, which means these two parameters must be updated each time a new point arrives. To avoid calling mean and standard deviation functions at every step (a full pass over the window can be quite costly, and we want this to run as fast as possible), we rely on a few properties of the mean and variance.
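For reference, here is what the naive approach we want to avoid looks like: a hypothetical sketch (not the code from our repository) that recomputes both statistics over the whole window for every new point, at $O(N)$ cost per point.

```python
import numpy as np

def normalize_window_naive(window):
    """Z-normalize the sliding window, recomputing mean and std from scratch.

    This costs O(N) per incoming point; the incremental updates derived
    below reduce the statistics update to O(1).
    """
    window = np.asarray(window, dtype=float)
    mu = window.mean()
    sigma = window.std()  # population std (1/N convention, as in equation (1) below)
    return (window - mu) / sigma
```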

Let $ts_n = \{x_1, x_2, \dots, x_n\}$ be our dynamic time series of length $N$ at time $n$.
The mean value and variance of this time series are defined as follows: $$\mu_n = \frac{1}{N}\sum_{i = 1}^nx_i \qquad \sigma_n^2 = \frac{1}{N}\sum_{i = 1}^n(x_i - \mu_n)^2 = \frac{1}{N}\Big(\sum_{i = 1}^nx^2_i - N\mu_n^2\Big). \quad (1)$$ Let us assume that, at this iteration, the new incoming point is $x_{n+1}$ and the oldest one is $x_1$. Then $$\mu_{n+1} = \frac{1}{N}\sum_{i = 2}^{n+1}x_i = \mu_n + \frac{x_{n+1} - x_1}{N}.$$
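As a quick sanity check of the mean update: with $N = 3$ and window $\{1, 2, 3\}$ we have $\mu_n = 2$; if $x_{n+1} = 7$ arrives and $x_1 = 1$ leaves, the formula gives $\mu_{n+1} = 2 + \frac{7 - 1}{3} = 4$, which is indeed the mean of the new window $\{2, 3, 7\}$.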

Concerning the variance, it is slightly more complicated: $$\sigma_{n+1}^2 = \frac{1}{N}\sum_{i = 2}^{n+1}(x_i - \mu_{n+1})^2 = \frac{1}{N}\sum_{i = 2}^{n+1}(x_i^2 - 2\mu_{n+1}x_i + \mu^2_{n+1}) = \frac{1}{N}\Big(\sum_{i = 2}^{n+1}x_i^2 - 2\mu_{n+1}\sum_{i = 2}^{n+1}x_i + N\mu^2_{n+1}\Big) = \frac{1}{N}\Big(\sum_{i = 2}^{n+1}x_i^2 - N\mu^2_{n+1} \Big),$$ since $\sum_{i = 2}^{n+1}x_i = N\mu_{n+1}$. Using the previous result we can expand the expression: $$\sigma^2_{n+1} = \frac{1}{N}\Big(\sum_{i = 2}^{n+1}x_i^2 - N\big(\mu_n + \frac{x_{n+1} - x_1}{N}\big)^2 \Big) = \frac{1}{N}\Big(\sum_{i = 2}^{n+1}x_i^2 - N\big(\mu^2_n + 2\mu_n\frac{x_{n+1} - x_1}{N} + \frac{(x_{n+1} - x_1)^2}{N^2}\big) \Big) \\ = \frac{1}{N}\Big(\sum_{i = 2}^{n+1}x_i^2 - N\mu^2_n -2\mu_n(x_{n+1} - x_1) - \frac{(x_{n+1} - x_1)^2}{N} \Big).$$

Using $(1)$ and the fact that $\sum_{i=2}^{n+1}x^2_i = \sum_{i = 1}^nx^2_i - x^2_1 + x^2_{n+1}$, we finally get $$\sigma^2_{n+1} = \sigma^2_n + \frac{1}{N}\Big(x^2_{n+1} - x^2_1 - 2\mu_n(x_{n+1} - x_1) - \frac{(x_{n+1} - x_1)^2}{N}\Big).$$
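To make these $O(1)$ updates concrete, here is a minimal sketch in Python. It is a hypothetical illustration of the two formulas above, not the code from our repository; the class name `SlidingStats` is ours.

```python
from collections import deque

class SlidingStats:
    """Mean and variance of a fixed-size sliding window, updated in O(1)."""

    def __init__(self, initial_window):
        self.window = deque(initial_window)
        self.N = len(self.window)
        self.mean = sum(self.window) / self.N
        # Population variance with the 1/N convention used in (1).
        self.var = sum((x - self.mean) ** 2 for x in self.window) / self.N

    def update(self, x_new):
        """Slide the window by one point and update mean and variance."""
        x_old = self.window.popleft()   # x_1, the point leaving the window
        self.window.append(x_new)       # x_{n+1}, the point entering
        delta = x_new - x_old
        mu_n = self.mean
        # mu_{n+1} = mu_n + (x_{n+1} - x_1) / N
        self.mean = mu_n + delta / self.N
        # sigma^2_{n+1} = sigma^2_n
        #   + (x_{n+1}^2 - x_1^2 - 2*mu_n*(x_{n+1} - x_1) - (x_{n+1} - x_1)^2 / N) / N
        self.var += (x_new ** 2 - x_old ** 2 - 2 * mu_n * delta - delta ** 2 / self.N) / self.N
        return self.mean, self.var

# Example: window {1, 2, 3} -> slide in 7 -> window {2, 3, 7}
stats = SlidingStats([1.0, 2.0, 3.0])
print(stats.update(7.0))  # (4.0, 4.666...), i.e. mu = 4 and sigma^2 = 14/3
```

Note that each update only touches the incoming and outgoing points, never the whole window, which is exactly what the derivation above promised.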

This mathematical development shows one interesting thing: we do not need to recompute the mean value and the standard deviation from scratch each time. As a matter of fact, we showed that updating the sliding window's global parameters only requires the incoming point, the outgoing point, and the previous mean value and variance!

Now that we can normalize (and denormalize) the data each time a new point arrives, we can perform the SAX transformation, keeping in mind that we need to slightly modify the way we do our PAA transform. If you want to see the code, please visit our repository. If you want to learn how to use this code, or to see the results on real data, please refer to the corresponding tutorial.
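To give a flavor of the final discretization step, here is a hedged sketch of how a normalized point could be mapped to a SAX symbol. The breakpoints are the standard SAX quantiles of $\mathcal{N}(0, 1)$ for a 4-symbol alphabet; the function name and interface are illustrative, not our repository's API, and for simplicity it discretizes a single point rather than a PAA segment.

```python
import numpy as np

# Quantiles of N(0, 1) splitting it into 4 equiprobable regions,
# i.e. the standard SAX breakpoints for an alphabet of size 4.
BREAKPOINTS = np.array([-0.6745, 0.0, 0.6745])

def sax_symbol(x, mean, std):
    """Map a raw point to a SAX symbol using the current window statistics."""
    z = (x - mean) / std if std > 0 else 0.0
    return "abcd"[int(np.searchsorted(BREAKPOINTS, z))]
```

In the online setting, `mean` and `std` would come from the incrementally maintained window statistics derived above, so each new point can be discretized as soon as it arrives.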