Comments (10)
I remember now! In one of the other MDL comments, I had asked the question:
What happens if the subsequence were not z-normalized at all?
Michael Yeh actually responded:
As split_pt stores z-scores, the discretization function won't work if the time series is not z-normalized.
I would suggest using the min-max discretization function for time series without normalization, with the min and max estimated from the whole time series (instead of the subsequence). It may require further exploration to determine the ideal discretization function under different scenarios.
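For reference, here is a minimal sketch of what a min-max discretization with a global min/max might look like. This is illustrative only: the function name and the exact rounding/bit-width details are assumptions, and stumpy's actual `_maamp_discretize` may differ.

```python
import numpy as np

def min_max_discretize(subseq, T_min, T_max, n_bit=8):
    # Map values into [0, 2**n_bit - 1] using the GLOBAL min/max of the
    # time series (not the subsequence's own min/max), then round to
    # integer levels
    return np.round(
        ((subseq - T_min) / (T_max - T_min)) * ((2 ** n_bit) - 1)
    ).astype(np.int64)

T = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
levels = min_max_discretize(T[1:4], T.min(), T.max())  # [51, 102, 153]
```

Because the min/max come from the whole series, every subsequence is discretized with the same transform, which is the property being discussed here.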
Then, I later discovered:
For non-normalized time series, it looks like I can apply Definition 3 from this paper to discretize the time series.
So, at least it is based on published work. I still don't feel comfortable discretizing a subsequence that is transformed by `T_add` ... as I don't know the motivation for the discretization function and what makes a "good" discretization function.
from stumpy.
"Transformer" sounds too vague/non-specific/open-ended.
Definitely right! 😄 `T_A_add` (`T_B_add`) is a better choice by far!
from stumpy.
We will also need to consider what this means for the multi-dimensional case and its effect on the MDL (minimum description length)
from stumpy.
In aamp_mmotifs.py, there is a call to the function `maamp_mdl`. We can see the following block of code in `maamp_mdl`:
Lines 275 to 281 in 3559b38
And the function `_maamp_discretize` is:
Lines 165 to 195 in 3559b38
And then, in `maamp_mdl`, we have:
Lines 293 to 296 in 3559b38
from stumpy.
I have no idea. I can't remember why we used Min Max Scaling for the discretization :(
I will do some research to see if I can find something.
I'm not sure where this statement came from
And, this is coming from you, who is very careful. Now I can better see why you try to be so careful with things that need to be maintained later. Even with that level of due diligence, some things may still go slightly wrong (by "wrong" I do not mean incorrect; I mean they may lose their clarity).
from stumpy.
The two examples above can be covered by the general case below:

For a given offset array `alpha`, transform the i-th subsequence, `T[i:i+m]`, to `T[i:i+m] - alpha[i]`, and compute the matrix profile accordingly.

If our distance function is the Euclidean distance (i.e. `p == 2` in the p-norm distance), then I think we can revise the code to achieve this in an efficient way, and preserve the code's maintainability as well.
Let's say we have two subsequences, `A` and `B`, each with length `m`.

And, let's say we want to just apply some offsets, `a` and `b`, to them before computing their distance, e.g. applying:

dist(A - a, B - b)**2 = dist(A, B)**2 + m * (a - b)**2 - 2 * m * (a - b) * (μ_A - μ_B)

where `dist` is the Euclidean distance function. Note that the first term is just the (squared) p-norm distance with no transformation applied.
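This identity can be sanity-checked numerically with a quick standalone sketch (not library code):

```python
import numpy as np

rng = np.random.default_rng(42)
m = 50
A, B = rng.normal(size=m), rng.normal(size=m)
a, b = 1.7, -0.4  # offsets applied to A and B

# left-hand side: squared distance between offset-transformed subsequences
lhs = np.linalg.norm((A - a) - (B - b)) ** 2

# right-hand side: untransformed squared distance plus the correction terms
rhs = (
    np.linalg.norm(A - B) ** 2
    + m * (a - b) ** 2
    - 2 * m * (a - b) * (A.mean() - B.mean())
)
# lhs and rhs agree up to floating-point error
```

This is what makes the efficient update possible: the correction terms only involve `m`, the offsets, and the rolling means, all of which are already available.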
Let `alpha` be an array of size `len(T) - m + 1`, where `alpha[i]` is the offset we want to apply to the i-th subsequence with length `m`. Then, we need to change this block of code

Lines 121 to 140 in 3559b38

to this:
if uint64_i == 0 or uint64_j == 0:
    # KEEP Existing Code
    p_norm = (
        np.linalg.norm(
            T_B[uint64_j : uint64_j + uint64_m]
            - T_A[uint64_i : uint64_i + uint64_m],
            ord=p,
        )
        ** p
    )
    # [ADD New Code] POST-PROCESS, ONLY when p == 2 and there is a user-defined offset
    p_norm = (
        p_norm
        + m * (alpha[uint64_i] - alpha[uint64_j]) ** 2
        - 2 * m * (alpha[uint64_i] - alpha[uint64_j]) * (μ_A[uint64_i] - μ_B[uint64_j])
    )
else:
    # Let's call the current p-norm distance `y`. It is the Euclidean distance after
    # applying the "offset" transformation, and `x` is the p-norm distance with no
    # transformation.
    # Use the current y to compute its corresponding x (i.e. revert the post-process
    # from the previous iteration). Then compute the new x. Finally, compute the new y.
    # [ADD New Code] PRE-PROCESS, ONLY when p == 2 and there is a user-defined offset
    p_norm = (
        p_norm
        - m * (alpha[uint64_i - uint64_1] - alpha[uint64_j - uint64_1]) ** 2
        + 2
        * m
        * (alpha[uint64_i - uint64_1] - alpha[uint64_j - uint64_1])
        * (μ_A[uint64_i - uint64_1] - μ_B[uint64_j - uint64_1])
    )
    # KEEP Existing Code
    p_norm = np.abs(
        p_norm
        - np.absolute(T_B[uint64_j - uint64_1] - T_A[uint64_i - uint64_1]) ** p
        + np.absolute(
            T_B[uint64_j + uint64_m - uint64_1]
            - T_A[uint64_i + uint64_m - uint64_1]
        )
        ** p
    )
    # [ADD New Code] POST-PROCESS, ONLY when p == 2 and there is a user-defined offset
    p_norm = (
        p_norm
        + m * (alpha[uint64_i] - alpha[uint64_j]) ** 2
        - 2 * m * (alpha[uint64_i] - alpha[uint64_j]) * (μ_A[uint64_i] - μ_B[uint64_j])
    )
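To convince ourselves that the revert/update bookkeeping is right, here is a self-contained sketch (names are illustrative, not stumpy's internals) that walks one diagonal of a self-join and compares the incrementally-maintained offset distance against a brute-force computation:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.normal(size=100)
m, g = 10, 25                       # subsequence length, diagonal index
k = len(T) - m + 1
alpha = rng.normal(size=k)          # hypothetical per-subsequence offsets
mu = np.convolve(T, np.ones(m) / m, mode="valid")  # rolling means

def brute(i, j):
    # direct squared distance between the offset-transformed subsequences
    return np.linalg.norm((T[i:i + m] - alpha[i]) - (T[j:j + m] - alpha[j])) ** 2

def correction(i, j):
    # the post-processing term: m * d**2 - 2 * m * d * (mu_i - mu_j)
    d = alpha[i] - alpha[j]
    return m * d ** 2 - 2 * m * d * (mu[i] - mu[j])

ok = True
for i in range(k - g):
    j = i + g
    if i == 0:
        x = np.linalg.norm(T[j:j + m] - T[i:i + m]) ** 2
    else:
        # PRE-PROCESS: revert the previous iteration's post-process (y -> x)
        x = y - correction(i - 1, j - 1)
        # existing rolling update: drop the first element pair, add the last
        x = abs(x - (T[j - 1] - T[i - 1]) ** 2 + (T[j + m - 1] - T[i + m - 1]) ** 2)
    # POST-PROCESS: re-apply the correction for the current (i, j)
    y = x + correction(i, j)
    ok = ok and np.isclose(y, brute(i, j))
```

Every `y` along the diagonal matches the brute-force distance, so the pre-process/post-process pair preserves the rolling update.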
from stumpy.
Considering that we are only adding either a (negative) constant or some pre-determined set of values for each subsequence, I think we should consider new parameters called `T_A_add` and `T_B_add` (and possibly extend to `T_A_divide` and `T_B_divide` where possible in the future). "Transformer" sounds too vague/non-specific/open-ended. Doing `X_add` purposely limits what is possible. What do you think @NimaSarajpoor?
from stumpy.
We will also need to consider what this means for the multi-dimensional case and its effect on the MDL (minimum description length)
I tried to go through the source code. I realized that, in non-normalized MDL, we obtain the global `min` and `max`, and then we convert the subsequences as follows: `(subsequence - min) / (max - min)`.

I think that we might be able to use the same approach after applying `T_A_add` to `T_A`. So, our `max` can be the max of the rolling max of the subsequences, and our `min` will be the min of the rolling min. (I think, at the end of the day, we want to apply the same transform to every subsequence when we want to compute the MDL.)
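A minimal sketch of that idea (purely illustrative — `T_A_add` does not exist in stumpy, and the function name here is hypothetical): view every length-`m` subsequence, add its offset, and take the global min/max across all transformed subsequences.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def transformed_min_max(T, T_add, m):
    # Each row of `subs` is one length-m subsequence with its own offset
    # (the hypothetical T_add[i]) added; the global min/max is then the
    # min of the rolling mins and the max of the rolling maxes
    subs = sliding_window_view(T, m) + T_add[:, None]
    return subs.min(), subs.max()

T = np.array([0.0, 1.0, 2.0, 3.0])
T_add = np.array([10.0, -10.0, 0.0])  # one offset per subsequence (m = 2)
lo, hi = transformed_min_max(T, T_add, 2)  # lo = -9.0, hi = 11.0
```

These two scalars could then feed the same `(subsequence - min) / (max - min)` conversion discussed above.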
But, how to test this proposal?
(1) We try to find a problem and then use the existing code aamp_mmotifs.py. Then, let's say `min = 0` and `max = 1`. We can try a new min that is lower than `min`, and a new max that is larger than `max`. This can help us understand if the main purpose of this transformation is to make sure that (i) the final values are limited to a certain range, and (ii) we are using the same transform for all subsequences.

(2) We insert two similar patterns with the same average, say 0, into a randomly-generated time series, and then we try to see if we can detect them using the existing code. Then, if it works, we can move one of the motifs up / down and try to capture them again using `T_A_add`.
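To illustrate what proposal (2) would exercise, here is a hedged sketch (not stumpy code; the mean-subtraction stands in for the hypothetical `T_A_add` offsets): two vertically-shifted copies of the same pattern look far apart in raw distance, but match again once each subsequence's mean is subtracted.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 20
T = rng.normal(0, 1, 200)
pattern = np.sin(np.linspace(0, 2 * np.pi, m))
T[30:30 + m] = pattern          # first motif
T[120:120 + m] = pattern + 5.0  # same shape, shifted up by 5

A, B = T[30:30 + m], T[120:120 + m]
raw_dist = np.linalg.norm(A - B)  # large: the vertical shift dominates
# subtracting each subsequence's mean (i.e. alpha[i] = rolling mean)
# recovers the match
offset_dist = np.linalg.norm((A - A.mean()) - (B - B.mean()))  # ~0
```

The non-normalized matrix profile would miss the shifted pair, while the offset-corrected distance would flag it.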
One thing that is certain is that MDL is complicated 😄
from stumpy.
I tried to go through the source code. I realized that, in non-normalized MDL, we obtain the global min and max, and then we convert the subsequences as follows: (subsequence - min) / (max - min).
Where exactly are you seeing this? I'm looking at `maamp.mdl` and `core._mdl` and I'm not seeing `(subsequence - min) / (max - min)`
from stumpy.
I think that we might be able to use the same approach after applying T_A_add to T_A. So, our max can be the max of rolling max of subsequeces, and our min will be the min of rolling min. (I think, at the end of the day, we want to apply the same transform to subsequence when we want to compute MDL).
I have no idea. I can't remember why we used Min Max Scaling for the discretization :(
This distribution is best suited for non-normalized time series data
I'm not sure where this statement came from 😆
from stumpy.