
Attractors.jl's Introduction

JuliaDynamics

This repository serves the following purposes:

  • Contains the source code for the JuliaDynamics website in the src and build folders.
  • Hosts the website via GitHub-pages and Jekyll.
  • Contains tutorials for all packages of JuliaDynamics in the tutorials folder.
  • Contains video resources for all packages of JuliaDynamics in the videos folder.

The website was modeled after the website of QuantumOptics.jl and most code that builds the site was re-used from the repository of QuantumOptics.jl (with permission).


To build locally, follow the instructions from here: https://jekyllrb.com/docs/

(install Jekyll and then run `bundle exec jekyll serve`, which serves by default at http://localhost:4000)

Attractors.jl's People

Contributors

awage, datseris, github-actions[bot], jay-sanjay, kalelr, reykboerner, spaette, stefanvaylbx2023


Attractors.jl's Issues

Simpler and clearer grouping configuration for AttractorsViaFeaturizing

The standard way to group features in the AttractorsViaFeaturizing method is to cluster them using DBSCAN. As we've known for some time, the algorithm is more or less a black box, so it can be quite hard to debug. Non-optimal parameters can lead to (i) finding the same attractor more than once (it might separate two trajectories on the same attractor that have slightly different feature values), or (ii) not finding one or more attractors that are there (grouping two or more attractors together). Sometimes both occur for the same parametrization. In particular for the projects I've been working on, it has been very frustrating to figure out an optimal parametrization, as you really need to look deep into all details of the computations.

As I understand it, DBSCAN is great if you have noisy data: it works well for identifying clouds (clusters) of higher density. But in a wide range of applications I have in mind, this doesn't seem necessary, since

  1. The systems are deterministic
  2. I know good features to separate the attractors
  3. I can run the trajectories long enough that features on the same attractor differ very little.

As a consequence, feature space is composed of very small, dense clouds that are widely separated from one another. DBSCAN is thus not really needed; we can use something much simpler and more intuitive. I've been having success with a simple comparison: compute the distance between each pair of features, and separate them if that distance exceeds a threshold.

# Requires Distances.jl for `evaluate`.
function _cluster_features_into_labels_comparing(features, config, distance_threshold; par_weight::Real = 0, plength::Int = 1, spp::Int = 1)
    labels_features = Vector{Int64}(undef, length(features)) # labels of all features
    labels_features[1] = 1
    clusters_idxs = [1] # indices of the features that define each cluster
    cluster_labels = [1] # labels for the clusters, going from 1:num_clusters
    next_cluster_label = 2
    metric = config.clust_distance_metric
    for idx_feature = 2:length(features)
        feature = features[idx_feature]
        # distance of this feature to the representative feature of each cluster
        dist_to_clusters = Dict(cluster_labels[idx] => evaluate(metric, feature, features[clusters_idxs[idx]]) for idx in eachindex(cluster_labels))
        dist_vals = collect(values(dist_to_clusters))
        min_dist, idx_min_dist = findmin(dist_vals)

        if min_dist > distance_threshold # bigger than threshold => new attractor
            push!(clusters_idxs, idx_feature)
            push!(cluster_labels, next_cluster_label)
            feature_label = next_cluster_label
            next_cluster_label += 1
        else # smaller => assign to closest cluster
            idx_closest_cluster = collect(keys(dist_to_clusters))[idx_min_dist]
            feature_label = idx_closest_cluster
        end

        labels_features[idx_feature] = feature_label
    end
    return labels_features
end

So far I'm naming this grouping config GroupingViaComparing. It has been working much better than DBSCAN: I know exactly what it's doing at all times, so it's easier to figure out a good parameter (there is just one: the maximum distance between in-cluster features), and there are no unexpected behaviors.
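
To make the intended usage concrete, here is a hypothetical sketch; the name GroupingViaComparing and its keywords are proposals from this issue, not existing API:

using Attractors, Distances

config = GroupingViaComparing(; # hypothetical constructor
    distance_threshold = 0.1, clust_distance_metric = Euclidean())
mapper = AttractorsViaFeaturizing(ds, featurizer, config) # `ds`, `featurizer` as usual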

Note also that it has some nice advantages:

  1. features could very well just be the attractors themselves; the distance metric could then be Hausdorff(), for instance, meaning we would distinguish attractors directly by their distances in state space, skipping entirely the need for features (see the sketch after this list). Of course this would be slower, but it can be parallelized, and maybe the distances don't need to be computed for lots of time points
  2. by skipping the computation of the pairwise distance matrix, we also avoid the memory problems of DBSCAN
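
As a sketch of advantage 1, assuming the Hausdorff set distance that Attractors.jl already uses for matching, the "features" could literally be the attractors:

# distinguish two attractors (StateSpaceSets A and B) directly in state space
d = set_distance(A, B, Hausdorff())
same_attractor = d < distance_threshold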

It needs more work and definitely more testing, but so far I'm quite happy. What do you think?

Rename keywords of `AttractorsViaRecurrences`

I believe the names of the keywords of AttractorsViaRecurrences are not the best. The problems I notice are:

  1. They use too many abbreviations
  2. They are inconsistent: mx_chk_hit_bas vs mx_chk_att. Why does one have hit and the other not?
  3. They are misnamed: the keywords don't quantify the "maximum" number of consecutive hits; they quantify the required number of consecutive hits.
  4. They never mention "recurrence" even though that is the central concept.

I propose the following renames:

  • mx_chk_fnd_att -> consecutive_recurrences
  • mx_chk_loc_att -> attractor_locate_steps
  • mx_chk_att -> consecutive_attractor_steps
  • mx_chk_hit_bas -> consecutive_basin_steps
  • mx_chk_lost -> consecutive_lost_steps
  • mx_chk_safety -> maximum_iterations

If we agree with these, once we are done with PR #103, I will open a new PR that does the rename and adds deprecations in place.
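
For illustration, a mapper constructed with the proposed names would read (values are arbitrary):

mapper = AttractorsViaRecurrences(ds, grid;
    consecutive_recurrences = 100,   # was: mx_chk_fnd_att
    attractor_locate_steps = 1000,   # was: mx_chk_loc_att
    consecutive_attractor_steps = 2, # was: mx_chk_att
    consecutive_basin_steps = 10,    # was: mx_chk_hit_bas
    consecutive_lost_steps = 20,     # was: mx_chk_lost
    maximum_iterations = 10^7,       # was: mx_chk_safety
)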

Match attractors by BasinEnclosure

Idea

So far in the (global) continuation methods, the attractors are matched by some arbitrarily-defined distance function between them. This is quite nice, but isn't necessarily the best method in all scenarios. I've been working on some highly multistable systems in which the attractors can be quite close in both state space and feature space. Tracking the attractors properly with some distance function has proved very difficult.
Instead, also following the literature on this, I decided it makes more sense to track the attractors differently. For an attractor A at parameter p1 and attractor B at p2 > p1, I would like the algorithm to match A and B if a trajectory starting on the endpoint of A converges to B when p = p2. In this sense, we consider attractors matched when the basin of B includes A, or at least part of A.

Implementation idea

I have implemented one way of doing this, but I'm pretty sure it is not the most efficient. The code is also not very well written, but due to time constraints I cannot work on improving it now. In any case, because of PR #132 I will share my code here.

The basic idea is to use the following matching function:

function distance_matching_endpoint(A, B) # A = after, B = before
    return evaluate(Euclidean(), A[1], B[end]) # requires Distances.jl
end

which will match attractors if the endpoint of the previous attractor is the same as the starting point of the subsequent attractor. Then, we need to implement a function that will achieve this, by:

  1. Running the continuation as normal
  2. For each attractor found in the continuation, go to the first parameter p1 in which it was found; then, use its endpoint as the initial condition for a trajectory integrated at the subsequent parameter p2 - this generates what I called att_future_integrated
  3. Find, in p2, the attractor that corresponds to this trajectory. To do this, match att_future_integrated with the already found attractors in p2 (in the code, I called them atts_future), using some distance function. If the attractor at p1 continues to exist at p2, this att_future_integrated should be very similar (basically the same) as one of the already-found attractors at p2 - they should be the same attractor. So this distance should basically be zero. If the attractor at p1 ceases to exist, the trajectory will converge to another attractor. Then there are two possibilities. First, the attractor to which it converged exists at p1. So here two attractors at p1 converge to the same attractor at p2, I call them coflowing attractors. Second, the attractor to which it converged emerges at p2. Here, the attractor at p1 ceases to exist, and another attractor emerges at p2.
  4. If there are coflowing attractors, I consider that matching should be done with the attractor at p1 that is closest to the attractor at p2. This is implemented in _find_coflowing and _key_legitimate.
  5. After deciding the corresponding attractor at p2, replace it by the trajectory starting at the endpoint of the attractor at p1.

I'm sorry if this is too confusing! I myself was confused writing it, so I guess the explanation and code reflect that. I'll reiterate the basic idea, though: keep the continuation and matching function as they are. Only add a new function that essentially replaces the attractors with a trajectory that starts at the endpoint of the attractors at the previous parameter. Then, simply apply the matching function using distance_matching_endpoint.

Code

function replace_by_integrated_atts(ds, featurizer, atts_all, prange, pidx; T, Ttr, Δt, coflowing_threshold = 0.1)
    all_keys = unique_keys(atts_all)
    atts_new = deepcopy(atts_all)
    for idx in 1:length(prange)-1
        p_future = prange[idx+1]
        ds_copy = deepcopy(ds)
        set_parameter!(ds_copy, pidx, p_future)
        atts_current = atts_new[idx]
        atts_future = atts_new[idx+1]
        # integrate the endpoint of each current attractor at the future parameter
        atts_future_integrated = Dict(k => trajectory(ds_copy, T, att_current[end]; Ttr, Δt)[1] for (k, att_current) in atts_current)
        ts = Ttr:Δt:T
        keys_illegitimate_all = Int64[]

        feats_current = Dict(k => featurizer(att, ts) for (k, att) in atts_current)
        feats_future = Dict(k => featurizer(att, ts) for (k, att) in atts_future)
        feats_future_int = Dict(k => featurizer(att, ts) for (k, att) in atts_future_integrated)

        for (k_future_int, att_future_int) in atts_future_integrated
            if k_future_int ∈ keys_illegitimate_all
                continue
            end
            key_matching_att = _find_matching_att(featurizer, ts, att_future_int, atts_future)

            # before replacing, must check if att_future_int is legitimate (and doesn't come from an att that disappeared!)
            keys_coflowing = _find_coflowing(featurizer, ts, att_future_int, k_future_int, atts_future_integrated, coflowing_threshold)
            if length(keys_coflowing) == 1 # no coflowing, att is legitimate
                att_replace = att_future_int
            else # coflowing
                key_legitimate = _key_legitimate(featurizer, ts, att_future_int, atts_current)
                if key_legitimate == k_future_int # legitimate, i.e. close to a previously existing att
                    att_replace = att_future_int
                    keys_illegitimate = filter(x -> x != key_legitimate, keys_coflowing)
                    push!(keys_illegitimate_all, keys_illegitimate...)
                else # "$k_future_int is coflowing but not legitimate"
                    continue
                end
            end

            atts_new[idx+1][key_matching_att] = att_replace
        end
    end
    return atts_new
end

function keys_minimum(d)
    vals = collect(values(d))
    ks = collect(keys(d))
    minval = minimum(vals)
    idxs_min = findall(x -> x == minval, vals)
    return ks[idxs_min]
end

function _find_matching_att(featurizer, ts, att_future_int, atts_future)
    f1 = featurizer(att_future_int, ts)
    dists_att_to_future = Dict(k => evaluate(Euclidean(), f1, featurizer(att_future, ts)) for (k, att_future) in atts_future)
    label_nearest_atts = keys_minimum(dists_att_to_future)
    if isempty(label_nearest_atts)
        @error "No matching attractor. Shouldn't happen!"
    elseif length(label_nearest_atts) > 1 # coflowing attractors
        @warn "Coflowing attractors. This will be dealt with later."
    end
    return label_nearest_atts[1] # key of the nearest attractor
end

function _find_coflowing(featurizer, ts, att_current, k_current, atts, coflowing_threshold) # coflowing if dist(att, atts) <= coflowing_threshold
    f1 = featurizer(att_current, ts)
    dist_to_atts = Dict(k => evaluate(Euclidean(), f1, featurizer(att, ts)) for (k, att) in atts) # includes its distance to itself
    idxs_coflowing = findall(x -> x <= coflowing_threshold, collect(values(dist_to_atts)))
    keys_coflowing = collect(keys(dist_to_atts))[idxs_coflowing]
    return keys_coflowing
end

function _key_legitimate(featurizer, ts, att_current, atts_prev)
    f1 = featurizer(att_current, ts)
    dist_to_prev = Dict(k => evaluate(Euclidean(), f1, featurizer(att_prev, ts)) for (k, att_prev) in atts_prev)
    ks_legitimate = keys_minimum(dist_to_prev)
    if length(ks_legitimate) > 1
        @warn "Two legitimate keys; taking the first."
    end
    return ks_legitimate[1]
end

typos

Convenience
Minimal
Silhouettes
Unknown
accumulated
approximation
arbitrarily
assigns
attractor
construct
converged
choose
deterministic
discretization
dimensional
disappear
disappears
explicitly
function
guaranteed
halt
halting
implemented
initializes
internal
just
original
originally
output
penalize
penalized
perturbation
perturbations
silhouette
statistics
system
system
threshold
unnecessary

$ grep -nr Convinience Attractors.jl
Attractors.jl/src/plotting.jl:89:Convinience combination of [`plot_basins_curves`](@ref) and [`plot_attractors_curves`](@ref)
$ grep -nr Mimimal Attractors.jl
Attractors.jl/docs/src/basins.md:39:## Mimimal Fatal Shock
$ grep -nr Silhuettes Attractors.jl
Attractors.jl/src/mapping/grouping/cluster_optimal_ϵ.jl:2:# Calculate Silhuettes
$ grep -nr Unkown Attractors.jl
Attractors.jl/src/mapping/grouping/cluster_optimal_ϵ.jl:60:        error("Unkown `optimal_radius_method`.")
$ grep -nr accummulated Attractors.jl
Attractors.jl/src/mapping/recurrences/finite_state_machine.jl:155:        # If we accummulated enough recurrences, we claim that we
$ grep -nr aproximation Attractors.jl
Attractors.jl/src/basins/fractality_of_basins.jl:130:    # Systematic error aproximation for the disk of radius ε
$ grep -nr arbitriraly Attractors.jl
Attractors.jl/src/basins/fractality_of_basins.jl:21:another initial condition that lead to another basin arbitriraly close. It provides also
$ grep -nr assings Attractors.jl
Attractors.jl/src/mapping/attractor_mapping.jl:163:`labels` returned by [`basins_fractions`](@ref) and simply assings them to a full array
$ grep -nr attarctor Attractors.jl
Attractors.jl/docs/src/examples.md:830:                    # color-code initial condition if we convegerd to attarctor
$ grep -nr constuct Attractors.jl
Attractors.jl/CHANGELOG.md:19:- New algorithm `subdivision_based_grid`. Allows user to automatically constuct the grid which simulates subdivision into regions with different discretization levels in accordance with state space flow speed.
$ grep -nr convegerd Attractors.jl
Attractors.jl/docs/src/examples.md:830:                    # color-code initial condition if we convegerd to attarctor
$ grep -nr coose Attractors.jl
Attractors.jl/docs/src/examples.md:715:    # will coose initial conditions that are only in the first 2 attractors
$ grep -nr determistic Attractors.jl
Attractors.jl/test/mapping/basins_of_attraction.jl:21:# determistic grid, we know exactly the array layout
$ grep -nr dicretization Attractors.jl
Attractors.jl/docs/src/examples.md:353:The constructed array corresponds to levels of dicretization for specific regions of the grid as a powers of 2,
$ grep -nr dimesional Attractors.jl
Attractors.jl/src/continuation/aggregate_attractor_fractions.jl:20:analysis purposes. For example, a high dimesional model of competition dynamics
$ grep -n dissapear Attractors.jl/docs/src/index.md
15:- New function [`match_continuation!`](@ref) which improves the matching during a continuation process where attractors dissapear and re-appear.
$ grep -n dissapears Attractors.jl/src/continuation/match_attractor_ids.jl
223:        # Here we always compute a next id. In this way, if an attractor dissapears
$ grep -nr explitily Attractors.jl
Attractors.jl/ext/plotting.jl:7:    # we need to explitily add here some default colors...
$ grep -nr funtion Attractors.jl
Attractors.jl/src/mapping/grouping/cluster_config.jl:118:# API funtion (group features)
$ grep -nr guaranteeed Attractors.jl
Attractors.jl/src/continuation/match_attractor_ids.jl:80:    # but also ensure that keys that have too high of a value distance are guaranteeed
$ head -n 30 Attractors.jl/src/mapping/recurrences/finite_state_machine.jl | grep hault
        # This clause here is added because sometimes the algorithm will never hault
$ grep -nr haulting Attractors.jl
Attractors.jl/src/mapping/recurrences/finite_state_machine.jl:39:            # `AttractorsViaRecurrences` algorithm exceeded safety count without haulting.
$ grep -nr imlemented Attractors.jl
Attractors.jl/src/mapping/attractor_mapping.jl:140:extract_attractors(::AttractorMapper) = error("not imlemented")
$ grep -nr initialzes Attractors.jl
Attractors.jl/src/mapping/grouping/attractor_mapping_featurizing.jl:151:# initialzes integrators for any given kind of system. But that can be done
$ grep -nr interal Attractors.jl
Attractors.jl/src/mapping/recurrences/attractor_mapping_recurrences.jl:161:    # call signature the interal basins info array of the mapper is NOT updated
$ grep -nr juuuuust Attractors.jl
Attractors.jl/test/basins/tipping_points_tests.jl:9:    # Also test analytically resolved juuuuust to be sure
$ grep -nr "orginal " Attractors.jl
Attractors.jl/test/basins/uncertainty_tests.jl:20:@testset "orginal paper" begin
$ grep -nr "orginally " Attractors.jl
Attractors.jl/src/mapping/recurrences/grids.jl:61:orginally coarse grid (a tuple of `AbstractRange`s).
$ grep -nr ouput Attractors.jl
Attractors.jl/src/basins/fractality_of_basins.jl:163:is a vector with the corresponding size of the balls. The ouput `d` is the estimation
Attractors.jl/src/basins/fractality_of_basins.jl:219:The ouput `α` is the estimation of the uncertainty exponent using the box-counting
$ grep -nr "penaltize " Attractors.jl
Attractors.jl/src/tipping/mfs.jl:186:        # penaltize if we stay in the same basin:
$ grep -nr "penaltized " Attractors.jl
Attractors.jl/src/tipping/mfs.jl:178:The algorithm uses BlackBoxOptim.jl and a penaltized objective function to minimize.
$ grep -nr "pertubation " Attractors.jl
Attractors.jl/src/tipping/mfs.jl:52:obtained pertubation it proceeds to the second step. On the second step, the algorithm
Attractors.jl/src/tipping/mfs.jl:54:norm of the pertubation found in the first step.
Attractors.jl/src/tipping/mfs.jl:90:This function generates a random pertubation of the initial point `u0` within
Attractors.jl/src/tipping/mfs.jl:93:If the pertubation is not in the same basin of attraction, it calculates the norm
Attractors.jl/src/tipping/mfs.jl:94:of the pertubation and compares it to the best pertubation found so far.
Attractors.jl/src/tipping/mfs.jl:95:If the norm is smaller, it updates the best pertubation found so far.
Attractors.jl/src/tipping/mfs.jl:96:It repeats this process total_iterations times and returns the best pertubation found.
Attractors.jl/src/tipping/mfs.jl:126:If pertubation with the same basin of attraction is found, it updates the best shock found
Attractors.jl/src/tipping/mfs.jl:128:and returns the best pertubation found.
$ grep -nr pertubations Attractors.jl
Attractors.jl/src/tipping/mfs.jl:49:the algorithm generates random pertubations within the search area and records
Attractors.jl/src/tipping/mfs.jl:53:generates random pertubations on the surface of the hypersphere with radius equal to the
Attractors.jl/src/tipping/mfs.jl:62:- `initial_iterations = 10000`: number of random pertubations to try in the first step of the
Attractors.jl/src/tipping/mfs.jl:125:the radius of the sphere on the surface of which it generates random pertubations.
$ grep -nr sillhoute Attractors.jl
Attractors.jl/src/mapping/grouping/cluster_optimal_ϵ.jl:81:    # vary ϵ to find the best one (which will maximize the mean sillhoute)
Attractors.jl/src/mapping/grouping/cluster_optimal_ϵ.jl:109:    # vary ϵ to find the best radius (which will maximize the mean sillhoute)
Attractors.jl/src/mapping/grouping/cluster_optimal_ϵ.jl:161:    # vary ϵ to find the best one (which will maximize the minimum sillhoute)
$ grep -nr statitics Attractors.jl
Attractors.jl/src/basins/fractality_of_basins.jl:98:        error("Ntotal must be larger than 1000 to gather enough statitics.")
$ grep -nr stystem Attractors.jl
Attractors.jl/src/mapping/attractor_mapping.jl:65:stystem by mapping initial conditions to attractors using `mapper`
$ grep -nr "syste " Attractors.jl
Attractors.jl/docs/src/examples.md:479:# initialize dynamical syste and sampler
$ grep -nr trehsold Attractors.jl
Attractors.jl/test/continuation/matching_attractors.jl:97:            # okay here we test the case that the trehsold becomes too large
$ grep -nr unecessary Attractors.jl
Attractors.jl/src/mapping/grouping/nearest_feature_config.jl:48:# and makes it unecessary to also implement `group_features`. The bulk version
$ 

The use of Continuating/continuate is unusual.

$ grep -nr Continuating Attractors.jl
Attractors.jl/src/continuation/continuation_recurrences.jl:115:        desc="Continuating basins fractions:", enabled=show_progress
$ grep -nr continuate Attractors.jl
Attractors.jl/CHANGELOG.md:76:- New basins fraction continuation method `RecurrencesFindAndMatch` that utilizes a brand new algorithm to continuate basins fractions of arbitrary systems.
$

Creating ics on grid for basins of attraction is super slow for higher dimensions

Hey

I need to calculate basins of attraction for a high-dimensional system (100+ dimensions), but I noticed that basins_of_attraction is super slow for these cases because of the creation of the initial conditions on a grid. Specifically,
A = StateSpaceSet([generate_ic_on_grid(grid, i) for i in vec(I)])

is very slow for higher dimensions (was taking more than an hour to run...).

I think there are two reasons: evaluating the CartesianIndices becomes slow, and generate_ic_on_grid allocates StaticVectors.

What would be a good solution for this?

Thanks a lot!
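
One possible workaround, sketched under the assumption that plain Vector initial conditions are acceptable to the mapper: generate the grid points lazily instead of materializing the whole StateSpaceSet up front.

# Lazily yield one initial condition per grid point, avoiding the up-front
# allocation of all L^D static vectors. `grid` is a tuple of ranges as above.
ics_on_grid(grid) = (collect(Float64, u) for u in Iterators.product(grid...))

# usage, one point at a time:
# for u0 in ics_on_grid(grid)
#     label = mapper(u0)
# end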

Rename: `continuation` to `global_continuation`

I don't remember well if I have shared the news, but @JonasKoziorek is a GSoC student currently working on a new submodule for DynamicalSystems.jl: https://github.com/JuliaDynamics/PeriodicOrbits.jl . The same project can also serve as a first step towards a common interface between BifurcationKit.jl and Attractors.jl.

After spending more time re-reading BifurcationKit.jl, and re-doing the comparison we did in our 2023 CHAOS paper, I am convinced that it is possible to rewrite / interface BifurcationKit.jl to be based on DynamicalSystems.jl, integrate with the ecosystem more, and also share the continuation interface. That is the bright unified future I have planned in my head, but alas, it would take too much effort from me to make all the changes in the BifurcationKit.jl code.

In any case, as we know, the continuation in each software is fundamentally different. That is why I propose to rename our continuation to global_continuation: "global" because it views the whole state space (within a specified box). BifurcationKit.jl may perhaps rename its continuation to local_continuation: "local" because it tracks a specific point in state space (or a periodic orbit), and perhaps more importantly because it computes a local stability indicator (in particular, via the Jacobian). I don't know whether @rveltz is interested in renaming the continuation function, but in the bright unified future I have dreamed of, having these two different names would be better for the user.

cc @awage @KalelR

Key `0` returned by `AttractorsViaRecurrences`

Describe the bug

If my understanding is correct, the output of basins_fractions or basins_of_attraction with AttractorsViaRecurrences should never contain 0 as an attractor ID, only -1 and positive integers. Yet here is an example where 0 is a key in the found basins!

Minimal Working Example

using Attractors

function dissipative_standard_map_rule(u, p, n)
    x, y = u
    ν, f₀ = p
    s = x + y
    xn = mod2pi(s)
    yn = (1 - ν)*y + f₀*(sin(s))
    return SVector(xn, yn)
end

p0 = (ν = 0.02, f₀ = 4.0)
u0 = [0.1, 0.1]
ds = DeterministicIteratedMap(dissipative_standard_map_rule, u0, p0)
density = 2000
xg = range(0, 2π; length = density+1)[1:end-1]
yg = range(-ymax, ymax; length = density)
grid = (xg, yg)

mapper_kwargs = (
    mx_chk_att = 3,       # attractors are well-separated in the state space: so this is small
    mx_chk_fnd_att = 200, # attractors are low period orbits, so this is small
    mx_chk_loc_att = 200, # very high multistability means this should be high
    mx_chk_hit_bas = 10,  # for more accuracy, but in any case basins are so riddled
                          # that it is unlikely that this clause will even be hit...
    sparse = false, # we want to compute full basins
    Ttr = 10,
)

mapper = AttractorsViaRecurrences(ds, grid; mapper_kwargs...)

basins, attractors = basins_of_attraction(mapper)
ids = sort!(unique(basins))
142-element Vector{Int32}:
  -1
   0
   1
   2
   3
   4
   5
   6
   7
   8
   ⋮

Package versions

Latest stable.


This issue also highlights that a logging system would be useful (discussed in #70).

Optimize feature selection for clustering (or in general finding attractors)

I have an idea of how we could make a function to optimize feature selection for clustering attractors. I will be creating this function while working on a new research project, but I think it is useful to discuss possibilities of how to make it happen.

My idea is as follows:

  • We have some "function" that returns a lot of features, like 20 or so.
  • From these features, we want to find the 3 or 4 that are most efficient at clustering the data into separated attractors.
  • We could perform an optimization loop: pick 3 features at random, cluster the data with these features, and then compute a clustering quality metric. Keep the three features that yield the highest quality.

The main question, therefore, is how we define a "cluster quality metric". On one hand, this metric needs to have a contribution from cluster separation: the more separated the clusters are, the clearer the clustering. This could be done by computing the pairwise distribution distances of the pdfs fitted to every cluster, and keeping as the metric the median of these distances.

However, this cluster quality metric should also have some sort of contribution from the number of clusters found. The more clusters the better...?
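
Below is a minimal sketch of such an optimization loop. The `quality` function is a placeholder, since defining it is exactly the open question above.

using Random

# Randomly search for the `k` feature dimensions that maximize a clustering
# quality metric. `features` is a vector of feature vectors; `quality` maps
# the reduced features to a number (higher = better clustering).
function best_feature_subset(features, k, quality; trials = 100)
    nf = length(first(features))
    best_score = -Inf
    best_subset = collect(1:k)
    for _ in 1:trials
        subset = sort!(randperm(nf)[1:k])       # pick k dimensions at random
        reduced = [f[subset] for f in features] # project the features
        score = quality(reduced)
        if score > best_score
            best_score, best_subset = score, subset
        end
    end
    return best_subset, best_score
end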

cc also @KalelR I think this is relevant for you too.

Interface redesign proposal: matchers and a generic continuation type

This proposal is motivated by two things:

  1. The currently open PR #126
  2. Some offline discussion I had with Kalel about a new way to match attractors that is fundamentally different from the way we match currently. It is so fundamentally different that one cannot just adjust the "distance function" in our current match_statespacesets! function so that the same result is achieved.

I want to propose a new interface that will naturally make Attractors.jl easier to extend, both in continuation algorithms and in matching algorithms.

Matchers

The first step is a complete separation of continuation and matching. In fact, this is already achieved in the source code, but not at a formal interface level.

I propose a new abstract type SSSetMatcher (state space set, a.k.a. attractor, matcher). Subtypes of this interface extend the function

function match_continuation!(attractors_info, matcher::SSSetMatcher)
    # do your thing, store the replacement maps
    return rmaps::Vector{Dict}
end

This function operates on the given "attractors" (a vector of dictionaries). The extra convenience function

function match_continuation!(continuation_quantity::Vector{Dict}, rmaps::Vector{Dict})
    for (quantity, rmap) in zip(continuation_quantity, rmaps)
        swap_dict_keys!(quantity, rmap)
    end
    return
end

can be applied instantly to the basins fractions curves, or any other measure of attractors that we track, such as those in the example of PR #129 .

This new interface allows us to have the current matcher as one type, and the new matcher that Kalel has designed (but is not yet pushed anywhere online here) as a second matcher.

New generic continuation type

This orthogonality of matchers and mappers means that we can generalize the current RecurrencesFindAndMatch to a more general AttractorsContinueAndMatch. An alternative name could be AttractorsSeedAndMatch or AttractorsSeedContinueMatch. Let me know which one you prefer.

Here is its docstring:

AttractorsContinueAndMatch(mapper, matcher [, seeding])

A global continuation method for attractors and their basins fractions.
mapper is any subtype of AttractorMapper which implements
extract_attractors, while matcher is any subtype of SSSetMatcher.

Description

This global continuation method has a simple basis.
At the first parameter slice of the continuation process, attractors and their fractions
are found using the given mapper.
At each subsequent parameter slice,
new attractors are found by seeding initial conditions from the previously found
attractors and then running these initial conditions through the mapper.
Seeding initial conditions close to previous attractors increases the probability
that if an attractor continues to exist in the new parameter, it is found.
Seeding is controlled by the seeding input. It is a function that,
given a state space set (an attractor), returns an iterator of initial conditions
sampled from the attractor. This generates the seeds.

After the special seeded initial conditions are mapped to attractors,
attractor basin fractions are computed by sampling additional initial conditions
using the ics provided to continuation.
I.e., exactly as in basins_fractions.
Naturally, during this step new attractors may be found, besides those found
using the "seeding from previous attractors".
Once the basins fractions are computed,
the parameter is incremented again and we perform the steps as before.

This process continues for all parameter values. After all parameters are exhausted,
the found attractors (and their fractions) are "matched" to the previous ones.
This means: their IDs are changed, so that attractors that are "similar" to those at a
previous parameter get assigned the same ID.
Matching is done by the provided matcher.
In fact, matching is a rather trivial call to match_continuation!
at the end of the continuation function. This call is almost free.
If you don't like the final matching output,
you may use an arbitrarily different matcher in match_continuation!
without having to recompute the whole continuation.
That is also why the way matching works is described in the docstrings of each matcher.

and here is its source code:

function continuation(acam::AttractorsContinueAndMatch, prange, pidx, ics;
        samples_per_parameter = 100, show_progress = true,
    )
    progress = ProgressMeter.Progress(length(prange);
        desc = "Continuing attractors and basins:", enabled=show_progress
    )
    mapper = acam.mapper
    reset_mapper!(mapper)
    # first parameter is run in isolation, as it has no prior to seed from
    set_parameter!(referenced_dynamical_system(mapper), pidx, prange[1])
    fs = basins_fractions(mapper, ics; show_progress = false, N = samples_per_parameter)
    # At each parameter `p`, a dictionary mapping attractor IDs to fractions is created.
    fractions_curves = [fs]
    # The attractors are also stored (and are the primary output)
    prev_attractors = deepcopy(extract_attractors(mapper))
    attractors_info = [prev_attractors]
    ProgressMeter.next!(progress; showvalues = [("previous parameter", prange[1]),])
    # Continue loop over all remaining parameters
    for p in prange[2:end]
        set_parameter!(referenced_dynamical_system(mapper), pidx, p)
        reset_mapper!(mapper)
        # Seed initial conditions from previous attractors
        # Notice that one of the things that happens here is some attractors have
        # really small basins. We find them with the seeding process here, but the
        # subsequent random sampling in `basins_fractions` doesn't. This leads to
        # having keys in `mapper.bsn_nfo.attractors` that do not exist in the computed
        # fractions. The fix is easy: we add the initial conditions mapped from
        # seeding to the fractions using an internal argument.
        seeded_fs = Dict{Int, Int}()
        for att in values(prev_attractors)
            for u0 in acam.seeding(att)
                # We map the initial condition to an attractor, but we don't care
                # about which attractor we go to. This is just so that the internal
                # array of `AttractorsViaRecurrences` registers the attractors
                label = mapper(u0; show_progress = false)
                seeded_fs[label] = get(seeded_fs, label, 0) + 1
            end
        end
        # Now perform basin fractions estimation as normal, utilizing found attractors
        # (the function comes from attractor_mapping.jl)
        fs = basins_fractions(mapper, ics;
            additional_fs = seeded_fs, show_progress = false, N = samples_per_parameter
        )
        # We do not match attractors here; the matching is independent step done at the end
        current_attractors = deepcopy(extract_attractors(mapper))
        push!(fractions_curves, fs)
        push!(attractors_info, current_attractors)
        overwrite_dict!(prev_attractors, current_attractors)
        ProgressMeter.next!(progress; showvalues = [("previous parameter", p),])
    end
    # Match attractors (and basins)
    rmaps = match_continuation!(attractors_info, acam.matcher)
    match_continuation!(fractions_curves, rmaps)
    return fractions_curves, attractors_info
end

As you can imagine, I have actually already implemented this proposal. Not only does it work immediately, but in fact it makes PR #126 obsolete. The existing AttractorsViaFeaturizing already directly works with this code; we just need to ensure that extract_attractors throws an error if ics is not a state space set but a generating function.


Thoughts? If you approve, I can put in the PR and we can merge this, so that Kalel can go ahead and do the new matching PR.

The continuation function remains exactly as is. So is the current "grouping continuation".
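
For completeness, the seeding input mentioned in the docstring could default to something as simple as the following sketch (the sample count 10 is arbitrary):

# A possible default seeding function: an iterator of initial conditions
# sampled randomly from the attractor (a StateSpaceSet).
default_seeding(attractor) = (attractor[i] for i in rand(1:length(attractor), 10))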

Better progress bars

Taking as an example RecurrenceSeededContinuation, the whole computation process can take hours if one requires very high solver accuracy, FSM accuracy, or just a lot of initial conditions. Not only that, but it can take an hour just to go from one parameter to another. The progress bar in this case is effectively stuck, giving no information on the progress.

I think we need to rework the progress bar so that it is not updated only when the parameter is updated. There are two ways I can think of:

  1. Allow for a sub-progress bar to be displayed by the internal basins_fractions calls. This would be much more informative for the user, but would also require a bit of fiddling with ProgressMeter.jl to see how it is possible. I think ProgressMeter.jl has an offset keyword for something like that.
  2. Rework the progress code so that a progress bar is given to the internal basins_fractions call. The internal function just calls next! on the progress bar, and the overarching progress bar has n*spp total stepping values.
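
A rough sketch of option 1, assuming ProgressMeter.jl's offset keyword behaves as documented (drawing the sub-bar one line below the main one):

using ProgressMeter

outer = Progress(length(prange); desc = "Parameters: ")
for p in prange
    inner = Progress(samples_per_parameter; desc = "Initial conditions: ", offset = 1)
    for _ in 1:samples_per_parameter
        # ... map one initial condition to an attractor ...
        next!(inner)
    end
    next!(outer)
end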

Improve memory-efficiency of clustering algorithm in AttractorsViaFeaturizing

Hey

The current way that the features in AttractorsViaFeaturizing are clustered can use a lot of memory. Just as a reminder: if we want to calculate 2D basins of attraction with side length L, we would

  1. integrate the L^2 initial conditions on the grid
  2. extract their features, leading to num_features = L^2 features
  3. apply the dbscan clustering algorithm, which requires a pairwise distance matrix, which is thus of size num_features^2 = L^4. As you see, this explodes even with relatively coarse grids: I was already having trouble with L ~ 200. Clustering.jl offers another way to use dbscan which apparently doesn't use the full pairwise matrix, and so should use less memory. I've been trying this and getting some really weird and hard-to-debug memory errors, which I guess stem from an allocation of a large array somewhere.

It would be nice anyway in the future for us to try other clustering methods. But I need these basins quickly for a project, so I've done the following:

  1. reduce (by a lot) the number of features to be clustered by doing a first, rough "clustering". This involves rounding the values in each feature to a certain number of digits, and then keeping only the unique (rounded) features. This is of course not perfect, as some features that differ only slightly will be repeated, but it already reduces the number of features from L^2 to roughly the number of attractors.
  2. apply dbscan to this reduced list of features, obtaining now the unique (clustered) features, that correspond to the attractors.
  3. use these attractors as templates and map all the unrounded features to them using the nearest-neighbors algorithm. This gets us the labels of all the features.
function _cluster_features_into_labels_large(features, config, ϵ_optimal; par_weight::Real = 0, plength::Int = 1, spp::Int = 1)
    # step 1: round features and keep the unique ones to drastically reduce their number
    ufeats = unique(map(feature -> round.(feature, sigdigits = 2), features))
    # step 2: dbscan on the reduced feature set (uses Attractors.jl internals)
    distances = Attractors._distance_matrix(ufeats, config; par_weight, plength, spp)
    dbscanresult = Attractors.dbscan(distances, ϵ_optimal; min_neighbors = config.min_neighbors, metric = nothing)
    cluster_labels_reduced = Attractors.cluster_assignment(dbscanresult)
    ulabels = unique(cluster_labels_reduced)
    # one template feature per cluster (the first feature carrying each label)
    templates = [ufeats[findfirst(==(l), cluster_labels_reduced)] for l in ulabels]

    # step 3: map all (unrounded) features to the nearest template
    templates_mat = reduce(hcat, templates)
    kdtree = Attractors.KDTree(templates_mat) # each point is a column in the matrix
    features_mat = reduce(hcat, features)
    cluster_idxs, dists = Attractors.knn(kdtree, features_mat, 1)
    cluster_labels = [ulabels[idx[1]] for idx in cluster_idxs]
    return cluster_labels
end

It has worked great in the example I tried here: firstly, it runs (haha); it uses a lot less memory; and it is much faster (no need to compute L^4 distances!). I'm not sure how this would behave in general, though. Happy to discuss this, and later do a pull request if you think it's a good idea.

PS: sorry for posting so many issues and not solving them, I'm quite busy with a few projects now. Will try to complete the other issues as soon as I can!

Expose `reset!(mapper)` to the user

Necessary steps for this:

  • Make basins_of_attraction or basins_fractions return copies of the stored attractors
  • Ensure reset! resets the ID of the latest attractor to 1. It doesn't matter for continuation what ID we set it to, as we do matching anyway at the end.
  • Add docstring to reset! and export the function and add it to the docs.
  • Define behavior of the function for the featurizing mapper. For proximity mapper it should do nothing.

Implementation of edge tracking algorithm

The edge tracking algorithm is a simple numerical method to find the edge state or (possibly chaotic) saddle on the boundary between two basins of attraction. It was first described by Skufca et al. (2006).

The algorithm has been successfully applied to, e.g., the laminar-turbulent boundary in plane Couette flow [1] and the Melancholia states separating the snowball and warm climate states of an intermediate-complexity climate model [2]. These examples demonstrate that the method can deal with complicated, high-dimensional, and chaotic boundaries. Thus, it may generally be useful for (a) finding saddles/edge states and (b) tracking the basin boundary in a range of dynamical systems.

As discussed with @Datseris, I am currently working on implementing this here.

Footnotes

  1. Schneider et al. (2008)

  2. Lucarini and Bódai (2017)
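
For concreteness, here is a bare-bones sketch of the algorithm's core loop as I read it from Skufca et al., not the implementation in progress: bisect between two states in different basins until they are bisect_tol apart, record the midpoint, evolve both endpoints for a time T, and repeat.

using LinearAlgebra: norm

# `mapper` labels an initial condition by its basin (e.g. AttractorsViaProximity);
# `u1` and `u2` must start in different basins.
function edge_track_sketch(ds, mapper, u1, u2; bisect_tol = 1e-6, T = 10.0, iters = 100)
    edge = typeof(u1)[]
    for _ in 1:iters
        while norm(u1 - u2) > bisect_tol # bisection towards the boundary
            um = (u1 + u2) / 2
            mapper(um) == mapper(u1) ? (u1 = um) : (u2 = um)
        end
        push!(edge, (u1 + u2) / 2) # point approximately on the boundary
        # evolve both endpoints; they separate again across the boundary
        u1 = trajectory(ds, T, u1)[1][end]
        u2 = trajectory(ds, T, u2)[1][end]
    end
    return edge
end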

Add an advanced matching example

The matching functionality offered by RAFM is very powerful, but we don't really highlight the possibilities much in the docs. We have this great example of the Hénon map and period doublings that we also use in the paper. We should make it into an example in the library docs as well.

The example is already in the test file for the RAFM continuation.

Grids of `AttractorsViaRecurrences` unnecessarily limit the number type to `Float64`

The grids used in AttractorsViaRecurrences convert all numbers to Float64. E.g., we have:

function RegularGrid(grid::NTuple)
    D = length(grid)
    grid_steps = SVector{D,Float64}(step.(grid))
    grid_maxima = SVector{D,Float64}(maximum.(grid))
    grid_minima = SVector{D,Float64}(minimum.(grid))
    return RegularGrid(grid_steps, grid_minima, grid_maxima, grid)
end

This is unnecessary. We should simply extract the number type T and use vectors of type T. This can impact performance if different number types are used.
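
The fix could be as simple as this sketch, mirroring the constructor above (assuming all ranges share a number type; otherwise one would promote_type across them):

function RegularGrid(grid::NTuple{D, <:AbstractRange}) where {D}
    T = eltype(first(grid)) # use the grid's own number type instead of Float64
    grid_steps = SVector{D, T}(step.(grid))
    grid_maxima = SVector{D, T}(maximum.(grid))
    grid_minima = SVector{D, T}(minimum.(grid))
    return RegularGrid(grid_steps, grid_minima, grid_maxima, grid)
end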

Algorithms to find basin boundary

There is no automated algorithm in DynamicalSystems.jl to explore the boundary between two basins. Here are some ideas from simple to difficult:

  1. Compute the basins on a grid and, with some filtering, look for cells that have a neighbor from a different basin. This is computationally expensive, especially in dimensions larger than 3.
  2. Find the attractors with some initial random sampling. With the bisection method, find a starting point on the boundary between the basins. Then launch a random search algorithm that explores the boundary. The idea would be to sample a small box of the phase space: if the box is on the boundary, we accept the point; if not, it is rejected with some probability. There are methods from the Markov chain Monte Carlo algorithm family that may fit.
  3. Find the saddle on the boundary and iterate the dynamical system backward, so that we can explore the stable manifold of the saddle.

There is an enormous amount of work on stochastic search (MCMC, Metropolis-Hastings, hit-and-run, ...), and there are very nice Julia packages that cover the subject.

I post here a naive attempt with the hit and run algorithm for the Duffing oscillator.
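
The script behind the screenshots was not included; as a stand-in, here is a naive sketch of the hit-and-run idea, where a step along a random direction is accepted only if small probes on either side of the new point still map to different basins:

using LinearAlgebra: norm

# Random-direction walk along a basin boundary. `mapper` labels initial
# conditions by basin; `u` starts on (or very near) the boundary.
function hit_and_run_boundary(mapper, u; δ = 0.05, nsteps = 1000)
    boundary_points = [copy(u)]
    for _ in 1:nsteps
        dir = randn(length(u)); dir ./= norm(dir) # random unit direction
        candidate = u .+ δ .* dir
        # accept only if the candidate still straddles the boundary
        if mapper(candidate .+ (δ/10) .* dir) != mapper(candidate .- (δ/10) .* dir)
            u = candidate
            push!(boundary_points, copy(u))
        end
    end
    return boundary_points
end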

[screenshot: boundary points found by hit-and-run for the Duffing oscillator]

Smooth boundaries are harder to find.
[screenshot: exploration of a smooth basin boundary]

Function `_rescale_to_01` is incorrect

Hey

I just looked at the _rescale_to_01 function, which we use to rescale each feature dimension onto the [0,1] interval, and noticed that it is clearly incorrect.

Currently, it's

function _rescale_to_01(features::AbstractStateSpaceSet)
    mini, maxi = minmaxima(features)
    return map(f -> f .* (maxi .- mini) .+ mini, features)
end

This does not rescale features to [0,1].

Instead, it should be

function _rescale_to_01(features::AbstractStateSpaceSet)
    mini, maxi = minmaxima(features)
    return map(f -> (f .- mini) ./ (maxi .- mini), features)
end

When I first wrote it, it was correct, I'm pretty sure; somewhere in the middle of the refactoring we got this wrong. It might very well not affect results, but we should correct it to do what it should.

I'm writing an issue because I implemented this fix and the dummy bistable test in grouping_continuation.jl got an error. I didn't have time to understand why this happens, but I saw that this rescaling of features leads to small floating point errors (e.g. the rescaled features being 0.49999999...2 instead of 0.5). Maybe the matching procedure does not match them because of the errors, thus creating new attractors? Or is the rescaling applied to the global pool (which would rescale the features a certain way) and then also applied to a local pool, for only one attractor (which would rescale in a different way)?
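
For a quick sanity check of the corrected formula on a toy set (assuming minmaxima from StateSpaceSets.jl, which returns the per-dimension minima and maxima):

using StateSpaceSets, StaticArrays

X = StateSpaceSet([SVector(1.0, 10.0), SVector(2.0, 20.0), SVector(3.0, 30.0)])
mini, maxi = minmaxima(X) # SVector(1.0, 10.0) and SVector(3.0, 30.0)
rescaled = map(f -> (f .- mini) ./ (maxi .- mini), X)
# each dimension now spans [0, 1]: the points become (0, 0), (0.5, 0.5), (1, 1)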

mapper(u0) not defined for AttractorsViaFeaturizing

In the docstring of AttractorsViaFeaturizing we have:

This `mapper` also allows the syntax `mapper(u0)` but only if the `grouping_config`
is _not_ `GroupViaClustering`.

but the function is not implemented:

function dumb_map(z, p, n)
    x, y = z
    r = p[1]
    if r < 0.5
        return SVector(0.0, 0.0)
    else
        if x ≥ 0
            return SVector(r, r)
        else
            return SVector(-r, -r)
        end
    end
end

r = 1.0
ds = DeterministicIteratedMap(dumb_map, [0., 0.], [r])
u0s = [1 => [r, r], 2 => [-r, -r]] # template ics

xg = yg = range(-2.0, 2.0; length = 100)
grid = (xg, yg)
expected_fs_raw = Dict(1 => 0.5, 2 => 0.5)
featurizer(A, t) = SVector(A[1][1])

function features_from_u(u)
    A, t = trajectory(ds, 100, u; Ttr = 1000, Δt = 1)
    featurizer(A, t)
end
templates = Dict(k => features_from_u(u) for (k, u) in u0s)
config = GroupViaNearestFeature(templates)
mapper = AttractorsViaFeaturizing(ds, featurizer, config; Ttr = 500)
mapper([1., 0.])

returns:

ERROR: MethodError: objects of type AttractorsViaFeaturizing{DeterministicIteratedMap{false, SVector{2, Float64}, 2,
typeof(dumb_map), Vector{Float64}}, GroupViaNearestFeature{1, Float64, NearestNeighbors.KDTree{SVector{1, Float64},
Euclidean, Float64, SVector{1, Float64}}, Vector{Int64}}, typeof(featurizer), Int64, 2, Float64} are not callable

I will propose a PR to solve this. It is not crucial, but it is a convenience function that is helpful.

Automatic characterization of attractors in the recurrences version

AttractorsViaRecurrences finds real attractors. Nevertheless, they are labeled with integers. DynamicalSystems.jl has a huge infrastructure for analyzing dynamical systems, though. It seems possible that we could use library functions such as the fractal dimension, various entropic quantities from ComplexityMeasures.jl, or periodicity- and Lyapunov-related tools from ChaosTools.jl to characterize attractors automatically: something like attaching labels such as "chaotic", "periodic", or "fixed point". We should also decide how to further separate co-existing fixed points or chaotic attractors.
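
As a very rough sketch of the idea, assuming ChaosTools' lyapunov and a tolerance for "zero" (the criteria here are crude placeholders):

using ChaosTools # for `lyapunov`

function label_attractor(ds, attractor; λ_tol = 1e-3, T = 10_000)
    length(attractor) == 1 && return "fixed point"
    reinit!(ds, attractor[1]) # start the system on the attractor
    λ = lyapunov(ds, T)       # maximal Lyapunov exponent
    return λ > λ_tol ? "chaotic" : "periodic"
end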

Separate the grouping functionality into a new package

It seems to me that the grouping code we have set up here is very, very nice: a simple two-function API, and extendable. It is probably a good idea to separate the code into an independent package that is used here as a dependency.

`convergence` output can have 0 or even negative values

After my updates to the convergence output in

fractions, labels, convergence = convergence_and_basins_fractions(mapper, ics; show_progress = false)

I've noticed that simulation results can have 0 or even negative convergence for the recurrences mapper.

Clearly, this is because of errors in the new logic I've implemented that adjusts convergence counts given the state of the finite state machine. I need to fix this, and I need to implement a robust test case for it as well to ensure it is working properly...

AttractorsViaRecurrences `mx_chk_loc_att` and `:att_found` state

Alright, @awage I think we should discuss this. At the moment, the mode of operation :att_found of the Finite State Machine (FSM) is as follows:

    # This state can only be reached from `:att_search`. It means we have
    # enough recurrences to claim we have found an attractor.
    # We then locate the attractor by recording enough cells.
    if bsn_nfo.state == :att_found
        if ic_label == 0 || ic_label == bsn_nfo.visited_cell_label
            # label this cell as part of an attractor
            bsn_nfo.basins[n] = bsn_nfo.current_att_label
            bsn_nfo.consecutive_match = 1
            store_attractor!(bsn_nfo, u)
        elseif iseven(ic_label) && (bsn_nfo.consecutive_match < mx_chk_loc_att)
            # We make sure we hit the attractor another `mx_chk_loc_att` consecutive times
            # just to be sure that we have the complete attractor
            bsn_nfo.consecutive_match += 1
            store_once_per_cell || store_attractor!(bsn_nfo, u)
        elseif iseven(ic_label) && bsn_nfo.consecutive_match ≥ mx_chk_loc_att
            # We have recorded the attractor with sufficient accuracy.
            # We now set the empty counters for the new attractor.
            cleanup_visited_cells!(bsn_nfo)
            bsn_nfo.visited_cell_label += 2
            bsn_nfo.current_att_label += 2
            reset_basins_counters!(bsn_nfo)
            # We return the label corresponding to the *basin* of the attractor
            return ic_label + 1
        end
        return 0
    end

This means that, once we enter :att_found, we start labelling all encountered cells as "attractor cells". However, the counter is incremented only the second time we encounter an attractor cell. And each time we do not encounter an attractor cell, we reset the counter and start again from scratch.

For a long time now, I thought that the mode of operation was a second option: "every subsequently visited cell is labelled an attractor cell, and each time the counter is incremented." That's not what we have.

But my argument is: shouldn't we do the second option...? The way things are right now, we are searching for recurrences twice: we first search for recurrences to trigger the :att_found phase, but then we search for recurrences again during the :att_found phase, as the counter is incremented only when a second recurrence occurs.

So, isn't it better if we do the second option? This way, we only search for recurrences once. Additionally, the user has much better clarity about what value to give to mx_chk_fnd_att: it is the only parameter that relates to recurrences in state space.

Now, if we do go with the second option, it is not clear to me what mx_chk_loc_att should do... My guess is that it should simply count how many steps to take after having found an attractor, labelling cells as attractor cells along the way.

Anyway, I am wondering why we did things this way. We search for recurrences twice, but it doesn't "improve the accuracy" of the algorithm if you think about it, because :att_found is a state one can never escape from.

I am not sure. I hope you know this better and remember things better, so you can tell me why the code works the first way instead of the second...

Unexpected `-1` label in `AttractorsViaFeaturizing`

I've created a piece of code that allows for easy visualization and comparison of attractor finding between frameworks.

The code is here: https://gist.github.com/Datseris/f39147ab8ac68dd1231895f796b0f2dd

It just computes the trajectories for Lorenz84 and featurizes them with four different featurizers (see code). The results are here:

[figure: Lorenz84 clustering results for the four featurizers]

Colors purple, teal, and white are valid attractors found by the algorithm. Color red is label -1. I do not understand why there is output -1, because I am not using the thresholded version; I am just using the default version.

cc @KalelR

Why do we store the visited cells...?

cc @awage

Why does BasinsInfo contain a field visited_list::Vector{CartesianIndex{D}}? Is this used anywhere? I am reading the source code, but I don't see that it is.

The only place in the source code where .visited_list is populated with cells is here:

    if bsn_nfo.state == :att_search
        if ic_label == 0
            # unlabeled box, label it with current odd label and reset counter
            bsn_nfo.basins[n] = bsn_nfo.visited_cell_label
            push!(bsn_nfo.visited_list, n) # keep track of visited cells
            bsn_nfo.consecutive_match = 1
        elseif ic_label == bsn_nfo.visited_cell_label
            # hit a previously visited box with the current label, possible attractor?
            bsn_nfo.consecutive_match += 1
        end

There are only two places in the source code where .visited_list is used, and both are the same function call:

relabel_visited_cell!(bsn_nfo, bsn_nfo.visited_cell_label, 0)

which calls the function:

function relabel_visited_cell!(bsn_nfo::BasinsInfo, old_label, new_label)
    while !isempty(bsn_nfo.visited_list)
        ind = pop!(bsn_nfo.visited_list)
        if bsn_nfo.basins[ind] == old_label
            bsn_nfo.basins[ind] = new_label
        end
    end
end

There are two things I notice:

  1. old_label can never be the same as new_label, as the visited cells by definition have a label of at least 4, while new_label is always 0, given how we call the function.
  2. We do not use the list of visited cells. Like, wow. The way we check whether we have entered a visited cell is by checking the cell label stored in the basins array.

So, @awage, do we actually need visited_list? I don't see how. It seems to me we are wasting computations by storing and then removing the visited cells, without actually using them anywhere!

Convenience function for various global stability / resilience measures

@KalelR and myself came across the following paper by Hana Krakovska and co-workers:

https://www.cambridge.org/core/journals/european-journal-of-applied-mathematics/article/resilience-of-dynamical-systems/B277FB38B049FD4DECC2097E7460E4E3?utm_campaign=shareaholic&utm_medium=copy_link&utm_source=bookmark

It provides several quantifiers of global stability (also called non-local stability, or resilience). Here is a table:

[table: overview of the resilience measures from the paper]

I believe it would be great if we created a function, e.g., resilience, that takes in an AttractorMapper and returns various metrics of nonlocal stability. I think that several of the indicators related to parameter variations could be part of a different function, whose input should also be the parameter to be varied. But honestly, I don't think it is even useful to include indicators based on parameter variations, as they are so specialized... So the rest of the indicators can all nicely be put in one function. Several of them one can get immediately with the current version of Attractors.jl with minimal effort!
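
As a sketch of how some of the table's measures map to existing functionality (with mapper and sampler as in basins_fractions):

fractions = basins_fractions(mapper, sampler) # ≈ "basin stability" of each attractor
attractors = extract_attractors(mapper)
# A "distance to the basin boundary"-type measure is essentially what the
# existing `minimal_fatal_shock` computes; see its docstring for the exact call.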

Rename `tipping_probabilities` to `basin_overlap`

Tipping probabilities is such a misleading function name... A function should be named according to what it does / what it gives you. And what this function gives you is super simple: the overlap of some basins of attraction before and after some change. Whether this can be interpreted as a "tipping probability" or not is subjective and up to the scientist using it to decide.

We should rename the function objectively and accurately according to what it does.

Grouping and matching attractors

Following is the code I wrote for grouping attractors and matching them across parameters. It contains some duplicate code from the matching algorithm.

I applied it successfully to a population dynamics system, as seen in the figure.
[figure: basin fractions continuation for the population dynamics system, with and without group-and-match]

The code is:

"""
Given the result of the continuation algorithm (`attractors_info` and `fractions_curves`), group the attractors in `attractors_info` according to the configuration established in `groupingconfig`, using features defined by `featurizer`. Also add up the basin fractions of the grouped attractors. The fractions and attractors are updated in-place in `attractors_info` and `fractions_curves`.
Uses duplicate code from the matching algorithm.
"""
function group_and_match_continuation!(groupingconfig, attractors_info, fractions_curves, featurizer; metric = Euclidean(), threshold = Inf)
    #1. group attractors for first parameter value. 
    prev_attractors = attractors_info[1]; prev_fs = fractions_curves[1];
    group_and_relabel!(groupingconfig, prev_attractors, featurizer, prev_fs)

    #2. For subsequent parameters, group attractors and then match them with the previously grouped attractors.
    for (current_attractors, current_fs) in zip(attractors_info[2:end], fractions_curves[2:end])
        group_and_relabel!(groupingconfig, current_attractors, featurizer, current_fs)
        if !isempty(current_attractors) && !isempty(prev_attractors)
            # If there are any attractors,
            # match with previous attractors before storing anything!
            rmap = match_attractor_ids!(
                current_attractors, prev_attractors; metric, threshold
            )
            swap_dict_keys!(current_fs, rmap)
        end
        # Then do the remaining setup for storing and next step
        Attractors.overwrite_dict!(prev_attractors, current_attractors)
    end
    rmap = Attractors.retract_keys_to_consecutive(fractions_curves)
    for (da, df) in zip(attractors_info, fractions_curves)
        swap_dict_keys!(da, rmap)
        swap_dict_keys!(df, rmap)
    end
    nothing
end

"""
Group attractors based on features specified by `featurizer` and then change their labels so that
the grouped attractors have the same label.
"""
function group_and_relabel!(groupingconfig, atts, featurizer, fs=nothing)
    features = [featurizer(A) for (k, A) in atts]
    newlabels = length(features) > 1 ? group_features(features, groupingconfig) : keys(atts)
    rmap = Dict(keys(atts) .=> newlabels)
    swap_dict_keys!(atts, rmap)
    !isnothing(fs) && sum_fractions_keys!(fs, rmap)
    return rmap
end

"""
Given a replacement map that is a dictionary mapping the old keys in `fs` to new keys,
update `fs` to the sum of the values of `fs` with the same key.
"""
function sum_fractions_keys!(fs, rmap)
    newkeys = unique(values(rmap))
    fssum = Dict(newkeys .=> 0.0)
    for oldkey in keys(fs)
        newkey = rmap[oldkey]
        fssum[newkey] += fs[oldkey]
        pop!(fs, oldkey)
    end
    for newkey in newkeys
        fs[newkey] = fssum[newkey]
    end
    return nothing
end

Additionally, there is a small fix needed in the group_features function (line 131 of cluster_config.jl). The line needs to be

    distances = pairwise(config.clust_distance_metric, features) 

Currently, the order of the arguments is inverted and it is trying to access config.metric, which does not exist.

The full code I used to generate the figures is

using DrWatson
@quickactivate
using OrdinaryDiffEq:Vern9
using DynamicalSystemsBase
using Attractors, Random
using CairoMakie
using Colors
include("$(srcdir())/vis/basins_plotting.jl")

monod(r, R, K) = r*R/(K+R)
function μ!(μs, rs, Rs, Ks)
    for i in eachindex(μs)
        mo1 = monod(rs[i], Rs[1], Ks[1,i])
        mo2 = monod(rs[i], Rs[2], Ks[2,i])
        mo3 = monod(rs[i], Rs[3], Ks[3,i])
        μs[i] = min(mo1, mo2, mo3)
    end
    nothing
end
# not the most optimized, but it works
function Rcoup!(Rcoups, Ns, Rs, μs, cs)
    fill!(Rcoups, 0.0)
    for j in eachindex(Rcoups)
        for i in eachindex(μs)
            Rcoups[j] += cs[j,i] * μs[i] * Ns[i]
        end
    end
    nothing
end

function competition!(du, u, p, t)
    @unpack rs, Ks, ms, Ss, cs, μs, Rcoups, D = p
    n = size(Ks, 2)
    Ns = view(u, 1:n)
    Rs = view(u, n+1:n+3)
    dNs = view(du, 1:n)
    dRs = view(du, n+1:n+3)
    μ!(μs, rs, Rs, Ks)
    Rcoup!(Rcoups, Ns, Rs, μs, cs)
    @. dNs = Ns * (μs - ms)
    @. dRs = D*(Ss - Rs) - Rcoups
    nothing
end

mutable struct CompetitionDynamics
    rs :: Vector{Float64}
    ms :: Vector{Float64}
    Ss :: Vector{Float64}
    μs :: Vector{Float64}
    Rcoups :: Vector{Float64}
    Ks :: Matrix{Float64}
    cs :: Matrix{Float64}
    D :: Float64
end

function CompetitionDynamics(fig="1")
    if fig == "4" || fig == "1"
        Ks  = [
            0.20 0.05 0.50 0.05 0.50 0.03 0.51 0.51;
            0.15 0.06 0.05 0.50 0.30 0.18 0.04 0.31;
            0.15 0.50 0.30 0.06 0.05 0.18 0.31 0.04;
        ]

        cs = [
            0.20 0.10 0.10 0.10 0.10 0.22 0.10 0.10;
            0.10 0.20 0.10 0.10 0.20 0.10 0.22 0.10;
            0.10 0.10 0.20 0.20 0.10 0.10 0.10 0.22;
        ]
        if fig == "1"
            Ks = Ks[:, 1:5]
            cs = cs[:, 1:5]
        end
    elseif fig == "2" || fig == "3"
        Ks = [
            0.20 0.05 1.00 0.05 1.20;
            0.25 0.10 0.05 1.00 0.40;
            0.15 0.95 0.35 0.10 0.05;
        ]

        cs = [
            0.20 0.10 0.10 0.10 0.10;
            0.10 0.20 0.10 0.10 0.20;
            0.10 0.10 0.20 0.20 0.10;
        ]

    else
        @error "nope"
    end

    N = size(Ks, 2)

    rs = [1.0 for i=1:N]
    D = 0.25
    ms = [D for i=1:N]
    Ss = [10.0 for j=1:3]
    μs = zeros(Float64, N)
    Rcoups = zeros(Float64, 3)
    return CompetitionDynamics(rs, ms, Ss, μs, Rcoups, Ks, cs, D)
end


"""
Given the result of the continuation algorithm (`attractors_info` and `fractions_curves`), group the attractors in `attractors_info` according to the configuration established in `groupingconfig` using features defined in `featurizer`. Also, add up the basin fractions of the grouped attractors. The fractions and attractors are updated in `attractors_info` and `fractions_curves`.
Uses duplicate code from the matching algorithm.
"""
function group_and_match_continuation!(groupingconfig, attractors_info, fractions_curves, featurizer; metric = Euclidean(), threshold = Inf)
    #1. group attractors for first parameter value.
    prev_attractors = attractors_info[1]; prev_fs = fractions_curves[1];
    group_and_relabel!(groupingconfig, prev_attractors, featurizer, prev_fs)

    #2. For subsequent parameters, group attractors and then match them with the previously grouped attractors.
    for (current_attractors, current_fs) in zip(attractors_info[2:end], fractions_curves[2:end])
        group_and_relabel!(groupingconfig, current_attractors, featurizer, current_fs)
        if !isempty(current_attractors) && !isempty(prev_attractors)
            # If there are any attractors,
            # match with previous attractors before storing anything!
            rmap = match_attractor_ids!(
                current_attractors, prev_attractors; metric, threshold
            )
            swap_dict_keys!(current_fs, rmap)
        end
        # Then do the remaining setup for storing and next step
        Attractors.overwrite_dict!(prev_attractors, current_attractors)
    end
    rmap = Attractors.retract_keys_to_consecutive(fractions_curves)
    for (da, df) in zip(attractors_info, fractions_curves)
        swap_dict_keys!(da, rmap)
        swap_dict_keys!(df, rmap)
    end
    nothing
end

"""
Group attractors based on features specified by `featurizer` and then change their labels so that
the grouped attractors have the same label.
"""
function group_and_relabel!(groupingconfig, atts, featurizer, fs=nothing)
    features = [featurizer(A) for (k, A) in atts]
    newlabels = length(features) > 1 ? group_features(features, groupingconfig) : keys(atts)
    rmap = Dict(keys(atts) .=> newlabels)
    swap_dict_keys!(atts, rmap)
    !isnothing(fs) && sum_fractions_keys!(fs, rmap)
    return rmap
end

"""
Given a replacement map that is a dictionary mapping the old keys in `fs` to new keys,
update `fs` to the sum of the values of `fs` with the same key.
"""
function sum_fractions_keys!(fs, rmap)
    newkeys = unique(values(rmap))
    fssum = Dict(newkeys .=> 0.0)
    for oldkey in keys(fs)
        newkey = rmap[oldkey]
        fssum[newkey] += fs[oldkey]
        pop!(fs, oldkey)
    end
    for newkey in newkeys
        fs[newkey] = fssum[newkey]
    end
    return nothing
end

reduced_grid(grid, newlength) = map(g -> range(minimum(g), maximum(g); length = newlength), grid)

figidx = "2"
p = CompetitionDynamics(figidx)

N = size(p.Ks, 2)
u0 = [[0.1 for i=1:N]; [S for S in p.Ss]]
ds = ContinuousDynamicalSystem(competition!, u0, p, (J, z, p, t)->nothing)
diffeq = (alg = Vern9(), maxiters=Inf);
int = integrator(ds, u0; diffeq)


function _default_seeding_process_deterministic(attractor::AbstractDataset)
    max_possible_seeds = 10
    seeds = round(Int, log(10, length(attractor)))
    seeds = clamp(seeds, 1, max_possible_seeds)
    return (attractor.data[i] for i in 1:seeds)
end

isextinct(A, idx) = all(A[:, idx] .<= 1e-2)
function get_metric(unitidx=nothing)
    if isnothing(unitidx)
        return "euclidean", Euclidean()
    else
        distance_extinction = function(A, B, idx)
            A_extinct = isextinct(A, idx)
            B_extinct = isextinct(B, idx)
            return (A_extinct == B_extinct) ? 0 : 1
        end
        return "distance_extinction", (A, B) -> distance_extinction(A, B, unitidx)
    end
end


pidx = :D; ps = 0.2:0.005:0.3;
xg = range(0, 60,length = 300); grid = ntuple(x->xg, N+3);
unitidxs = [3]
samples_per_parameter = 300
mx_chk_fnd_att = 9
groupandmatch_bools = [false, true]; groupandmatch_savetogether = true
threshold = 0.5
for unitidx in unitidxs
    info_extraction = A -> isextinct(A, unitidx)
    metricname, metric = get_metric(unitidx)
    ds = ContinuousDynamicalSystem(competition!, u0, p, (J, z, p, t)->nothing);
    mapper = AttractorsViaRecurrences(ds, grid;
            Δt= 1.0,
            mx_chk_fnd_att,
            diffeq,
        );
    continuation = RecurrencesSeedingContinuation(mapper; seeds_from_attractor=_default_seeding_process_deterministic, metric, threshold, info_extraction);
    # continuation = RecurrencesSeedingContinuation(mapper; metric, threshold, info_extraction);
    sampler, = statespace_sampler(Random.MersenneTwister(1234);
        min_bounds = minimum.(grid), max_bounds = maximum.(grid)
    );
    fractions_curves_og, attractors_info_og = basins_fractions_continuation(
        continuation, ps, pidx, sampler;
        show_progress = true, samples_per_parameter
    );
    fig = nothing
    for (idx, groupandmatch) in enumerate(groupandmatch_bools)
        fractions_curves = deepcopy(fractions_curves_og)
        attractors_info = deepcopy(attractors_info_og)
        if groupandmatch
            featurizer(A) = [Int32(A)]
            groupingconfig = GroupViaClustering(; min_neighbors=1, optimal_radius_method=0.5) #note that minneighbors = 1 is crucial for grouping single attractors
            metric = function distance_function_bool(A,B)
                return abs(A - B)
            end
            group_and_match_continuation!(groupingconfig, attractors_info, fractions_curves, featurizer; metric=distance_function_bool)
        end

        #plot details
        ukeys = unique_keys(attractors_info)
        label_extincts = map(atts->[k for (k,v) in atts if v == 1], attractors_info); label_extincts = unique(vcat(label_extincts...))
        label_surviving = [key for key in ukeys if key ∉ label_extincts]
        legend_labels = [label in label_extincts ? "extinct" : "surviving" for label in unique_keys(attractors_info)]
        colors_surviving = length(label_surviving) == 1 ? [colorant"green"] : collect(range(colorant"darkolivegreen2", stop=colorant"green", length=length(label_surviving)))
        colors_extinct   = length(label_extincts) == 1 ? [colorant"red"] : collect(range(colorant"red", stop=colorant"red4", length=length(label_extincts)))
        colors_bands = [colorant"white" for _ in unique_keys(attractors_info)]
        colors_bands = merge(Dict(label_surviving .=> colors_surviving), Dict(label_extincts .=> colors_extinct))
        #plot
        idxrow = groupandmatch_savetogether ? idx : 1
        fig, ax = basins_fractions_plot(fractions_curves, collect(ps); idxrow, fig, add_legend=true, legend_labels, colors_bands);
        ax.xlabel = "D";
        ax.ylabel = "fractions";
        ax.title = ["not grouped", "grouped"][idxrow]
        vlines!(ax, 0.25, color=:black);
        !groupandmatch_savetogether && save("$(plotsdir())/defaultseeding-populationdynamics-fractionscontinuation-fig$figidx-method_$metricname-unitidx_$(unitidx)-samples_$(samples_per_parameter)-mx_chk_fnd_att_$(mx_chk_fnd_att)-groupandmatch_both-th_$(threshold).png", fig, px_per_unit=3)
        # fig
    end
    groupandmatch_savetogether && save("$(plotsdir())/defaultseeding-populationdynamics-fractionscontinuation-fig$figidx-method_$metricname-unitidx_$(unitidx)-samples_$(samples_per_parameter)-mx_chk_fnd_att_$(mx_chk_fnd_att)-groupandmatch_$groupandmatch-th_$(threshold).png", fig, px_per_unit=3)
end


Provide "rematch" function

After we are finished with our recurrence-based continuation, we have matched attractors. However, it may be that a different matching distance, or threshold, would provide better continuation curves.

Thankfully, all the information necessary to "re-match" attractors is stored in the output, and hence one doesn't need to call the continuation function again; only match_attractor_ids! is needed. We can provide a convenience function rematch!(basin_fractions, attractors_info; distance, threshold) that does the 3-4 function calls necessary for the rematch.
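A minimal sketch, assuming the output layout of basins_fractions_continuation (vectors of dictionaries, one entry per parameter value) and reusing match_attractor_ids! and swap_dict_keys! as used above:

# Sketch only: re-match attractors across all parameter values with a
# different metric/threshold, without re-running the continuation.
function rematch!(fractions_curves, attractors_info; metric = Euclidean(), threshold = Inf)
    for i in 2:length(attractors_info)
        rmap = match_attractor_ids!(
            attractors_info[i], attractors_info[i-1]; metric, threshold
        )
        swap_dict_keys!(fractions_curves[i], rmap)
    end
    return fractions_curves, attractors_info
end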

`AttractorsViaProximity` returns inconsistent attractor labels for the same state

Describe the bug
I observe unexpected behavior from AttractorsViaProximity. The corresponding mapper() function does not return a unique label for a given state when being called repeatedly.

More precisely:

  • I define mapper = AttractorsViaProximity(...)
  • For a point x, mapper(x) returns 2
  • I call mapper(y) for a different state y
  • I call mapper(x) again; now it returns 1

Minimal Working Example

using DynamicalSystems
using DynamicalSystemsBase
using StaticArrays, LinearAlgebra

# My 2D dynamical system
function fhn(u,p,t)
    x, y = u

    dx = -x^3 + x - y
    dy = -3*y + x

    SVector{2}([dx, dy])
end

ds = ContinuousDynamicalSystem(fhn, [1.,1.], nothing)

# Get the two stable fixed points
box = (-2..2) × (-1..1)
fp, eigs, stable = DynamicalSystems.fixedpoints(ds, box)
attr = Dict(i => Dataset([fp[stable][i]]) for i in 1:2)

mapper = AttractorsViaProximity(ds, attr, 0.05; Δt=0.01, mx_chk_lost=10000)

# Define a state p1 and print its attractor label
p1 = [0.2701377011835575, 1.0]
println("Mapper label of p1: $(mapper(p1))")

# Call the mapper function on a different state
call = mapper(ones(2))

# Reprint the label of p1
println("Mapper label of p1: $(mapper(p1))")

The above code outputs the following:

Mapper label of p1: 2
Mapper label of p1: 1

However, I would expect that "Mapper label of p1" returns the same label every time, since neither the system ds nor the state p1 have been changed. Note that the point p1 lies close to the basin boundary separating the two basins of attraction, but the system is not chaotic, so I don't understand how the same initial condition can land in different attractors.

Package versions

[61744808] DynamicalSystems v2.3.0
[6e36e845] DynamicalSystemsBase v2.9.1
[90137ffa] StaticArrays v1.5.10
[37e2e46d] LinearAlgebra

Parallel version of AttractorsViaRecurrences

One of the main drawbacks of AttractorsViaRecurrences for mapping initial conditions to attractors is that it is sequential: the mapper takes one initial condition at a time and maps it to an attractor.

I have an idea for a partial solution to this problem for high-dimensional systems. When computing basins fractions or basins of attraction, a large number of samples is necessary. So the idea is the following:

  • The system caches N trajectories computed in parallel, integrated from Ttr to Tmax.
  • The mapper processes each trajectory sequentially using the cached points until it finds the attractor or the basin. If the mapper reaches Tmax and needs more points, an integrator takes over and feeds the mapper new points.
  • Once all the trajectories in the cache have been processed, we start the process again.

Benefit: speed!
Drawback: we compute the full trajectories, but the mapper may discard a lot of the computed points if a trajectory reaches an attractor before Tmax.

I think it would be useful for very large systems. I don't think it will be too much additional code, maybe two or three new functions. A rough sketch follows.
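A rough sketch under heavy assumptions: trajectory is the library's integration routine, while map_cached_trajectory is hypothetical and would feed the mapper cached points, resuming integration only if the attractor is not found by Tmax. Each task deep-copies the system so the parallel integrations do not share state:

# Sketch only: integrate many initial conditions in parallel, then let the
# mapper consume the cached trajectories sequentially.
function parallel_basins_fractions(ds, mapper, ics; Ttr = 0.0, Tmax = 1000.0, Δt = 1.0)
    cache = Vector{Any}(undef, length(ics))
    Threads.@threads for i in eachindex(ics)
        local_ds = deepcopy(ds)  # avoid sharing integrator state across threads
        X, _ = trajectory(local_ds, Tmax, ics[i]; Ttr, Δt)
        cache[i] = X
    end
    # map_cached_trajectory is hypothetical: it would replay cached points
    # into the mapper and fall back to integration if more points are needed.
    return [map_cached_trajectory(mapper, X) for X in cache]
end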

Potential issue in extending `tipping_probabilities` to attraction basins computed on non-uniform grid

@Datseris @awage, there is a potential issue with how tipping_probabilities are computed at the moment. The result is correct only if the grid steps are unitary in all directions. If they are not, then the computation of the basin measure must take the true grid size into account. Here is my temporary workaround:

function my_tipping_probabilities(basins_before, basins_after, grid)
    @assert size(basins_before) == size(basins_after)

    bid, aid = unique.((basins_before, basins_after))
    P = zeros(length(bid), length(aid))
    # Make -1 the last entry in bid, aid, if it exists
    put_minus_1_at_end!(bid); put_minus_1_at_end!(aid)

    # Notice: the following loops could be optimized with smarter boolean operations,
    # however they are so fast that everything should be done within milliseconds even
    # on a potato
    for (i, ι) in enumerate(bid)
        B_i = findall(isequal(ι), basins_before)
        μ_B_i = 0.0
        for ind in B_i
            # μ = measure on a non-unitary grid (here the grid comes from
            # ranges/vectors of a 4-variable state space); each index k must
            # use its own axis grid[k] (the original draft reused grid[2]).
            μ_B_i += grid[1][ind[1]] * grid[2][ind[2]] * grid[3][ind[3]] * grid[4][ind[4]]
        end
        # μ_B_i = length(B_i) # μ = measure ## This was the original code instead
        for (j, ξ) in enumerate(aid)
            # Directly compute the overlap of basin ι before with basin ξ after
            overlap = findall((basins_before .== ι) .& (basins_after .== ξ))
            μ_overlap = 0.0
            for ind in overlap
                μ_overlap += grid[1][ind[1]] * grid[2][ind[2]] * grid[3][ind[3]] * grid[4][ind[4]]
            end
            P[i, j] = μ_overlap / μ_B_i
        end
    end
    return P
end

function put_minus_1_at_end!(bid)
    if -1 ∈ bid
        sort!(bid)
        popfirst!(bid)
        push!(bid, -1)
    end
end
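Note that multiplying the coordinate values themselves is only a stand-in for the true cell measure. A minimal sketch of a proper per-cell volume, assuming each grid axis is a sorted vector of cell centers, could look like:

# Sketch: volume of the grid cell at CartesianIndex `ind`, using midpoint
# differences along each axis (boundary cells are clipped to the axis ends).
function cell_volume(grid, ind)
    vol = 1.0
    for (k, g) in enumerate(grid)
        i = ind[k]
        lo = i == 1         ? g[1]   : (g[i-1] + g[i]) / 2
        hi = i == length(g) ? g[end] : (g[i] + g[i+1]) / 2
        vol *= hi - lo
    end
    return vol
end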

Adaptive grid for finding attractors with recurrences

Is your feature request related to a problem? Please describe.
In many dynamical systems, the timescale of motion depends strongly on the location in state space. For our recurrences algorithm this may lead to problems like JuliaDynamics/ChaosTools.jl#258.

Describe the solution you'd like
Ultimately, we would like to have a way to alter the grid so that it is finer in regions where the motion is slow. The simplest way to do this is to allow users to pass in arbitrary locations for grid points, instead of a uniform range.

Describe alternatives you've considered
A more advanced version would be to provide a function that creates such a grid by computing the mean vector field and then deciding on a good grid. We are guessing that people who work with finite element methods know such techniques, and maybe in the future we can learn from them. But for now the above suggestion is already something to start from; a hypothetical illustration follows.
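Purely illustrative, assuming the mapper accepted sorted vectors as grid axes (currently it expects ranges):

# Hypothetical non-uniform grid: denser sampling where the motion is slow.
xg = unique!(sort!(vcat(range(-2.0, 0.0; length = 200), range(0.0, 2.0; length = 50))))
yg = copy(xg)
# mapper = AttractorsViaRecurrences(ds, (xg, yg))  # if vectors were accepted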

Methods to help in finding good parameters for the recurrences method

In lots of cases, AttractorsViaRecurrences works great with the default parameters. But this isn't always true, especially for more complicated systems. Then we need to fine-tune the meta-parameters at least a bit: do we change the grid length? The grid range? The other parameters?
It would be great if there were some method to help identify which meta-parameters can be improved for more accuracy or speed, or at least to point out which meta-parameters aren't good.

A simple example of this is for cases in which attractors aren't found and the method returns `-1`. Currently, this can mean that:

  1. The trajectory spent more than mx_chk_lost steps out of the grid or
  2. The trajectory didn't converge to an attractor after mx_chk_safety steps or
  3. The norm of the integrator becomes larger than horizon_limit

Lifting the degeneracy by changing the return codes for these from -1 to -2, -3, and so on would already help. Also, before returning -X we could throw a warning. Similarly, would a debugging mode be good? This could print info or warning messages when each attractor is found, saying why it was found.

QoL: automatic limits in attractor continuation animation

Our function animate_attractors_continuation is incredible. However, it requires explicitly inputted limits, as the limits do not change during the video production. It makes sense that the limits do not change, but we can improve on the default limits of 0-1: loop over all attractors found, estimate the minimum and maximum of each, and use the overall minima and maxima as the limits.
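A minimal sketch of that loop, assuming attractors_info is a vector of dictionaries of state space sets and that minmaxima returns the columnwise extrema of a set:

# Sketch: global plot limits from all attractors found across a continuation.
function automatic_limits(attractors_info)
    mins, maxs = nothing, nothing
    for atts in attractors_info, (_, A) in atts
        amin, amax = minmaxima(A)  # columnwise extrema of the set A
        mins = isnothing(mins) ? collect(amin) : min.(mins, amin)
        maxs = isnothing(maxs) ? collect(amax) : max.(maxs, amax)
    end
    return mins, maxs
end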

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

Minimal Fatal Shock search algorithm

In the paper: https://www.nature.com/articles/s41598-020-68805-6

Minimal fatal shocks in multistable complex networks, Halekotte & Feudel, 2020

The authors discuss "shock-induced tipping" and quantify its resilience via the minimal fatal shock.

The main gist of the idea is in Fig. 1 of the article.

I think it is possible, and perhaps even simple, to implement an algorithm that computes this for a generic dynamical system in a generic manner.

Inputs should be a dynamical system, an initial condition u_0, and a function that quantifies the change of attractor (or do we use automatic search in basins? I'll need to read the paper in more detail to say more about this function; it is the color coding of Fig. 1). A naive sketch of the search follows.
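Everything below is an assumption about a possible API: mapper is any AttractorMapper labeling initial conditions, and the search is brute force, only to illustrate the idea; a real algorithm would optimize along the basin boundary as in the paper.

using LinearAlgebra: norm

# Sketch: smallest random perturbation of u0 that lands in a different basin.
function minimal_fatal_shock(mapper, u0; nsamples = 10_000, R = 1.0)
    label0 = mapper(u0)
    best, bestshock = Inf, zero(u0)
    for _ in 1:nsamples
        shock = (rand(length(u0)) .- 0.5) .* 2R  # uniform in a box of half-width R
        if norm(shock) < best && mapper(u0 .+ shock) != label0
            best, bestshock = norm(shock), shock
        end
    end
    return bestshock, best
end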

Basins of attraction on an irregularly spaced grid

How can I do a search/characterization of the basins of attraction on an irregular grid? It is in fact not an uncommon situation to have basins of attraction that are robust against perturbations of some state variables but not of others. So it is convenient to search on, say, N points along variables that are known to have little impact on trajectories' ending points within a reasonable interval, but on 2N or more points along those that are critical and show high variability.

Ideally say, we would like to have:

    using DynamicalSystems, OrdinaryDiffEq

    ds = ContinuousDynamicalSystem(my_model, u0, p)

    ## Say "my_model" has a state space of three variables
    x1 = range(0, 10, length = N_1)
    x2 = range(0, 70, length = N_2)
    x3 = range(0.01, 5, length = N_3)
    grid = (x1, x2, x3)

    # Calculate basins of attraction (old)
    reltol = 1e-6
    abstol = 1e-6
    diffeq = (alg = Vern9(), reltol = reltol, abstol = abstol)
    attractors = ... # Some assignment from known attractors of my_model
    mapper = AttractorsViaProximity(ds, attractors; diffeq = diffeq)
    basins, = basins_of_attraction(mapper, grid)

When I tried to run something similar to this I get in IJulia:

MethodError: no method matching generate_ic_on_grid(::Tuple{Matrix{Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, ::CartesianIndex{4})

Closest candidates are:
generate_ic_on_grid(::Tuple{Vararg{T, B}}, ::Any) where {B, T} at ~/.julia/packages/ChaosTools/PHPDF/src/basins/attractor_mapping.jl:145

heatmap_basins_attractors doesn't work for Henon map

The following code:

using CairoMakie
using Attractors, StaticArrays
function henon_rule(u, p, n) 
    x, y = u
    a, b = p
    xn = 1.0 - a*x^2 + y
    yn = b*x
    return SVector(xn, yn)
end
u0 = [0.2, 0.3];
p0 = [1.2, 0.3];
henon = DeterministicIteratedMap(henon_rule, u0, p0);
xg = range(-2.5, 2.5; length = 1500)
yg = range(-2.5, 2.5; length = 1500)
grid=(xg,yg)
mapper = AttractorsViaRecurrences(henon, grid; sparse = false)
basins, attractors = basins_of_attraction(mapper; show_progress = false)
heatmap_basins_attractors(grid, basins, attractors)

gives a figure showing just the attractor, with no basins. I am using Julia 1.9.

User-specified threshold / radius

I have some attractors. I have a feature of these attractors called "R". I want to cluster attractors by R, but with "threshold" 0.1, whatever this means. In principle I would like to separate attractors into clusters separated by distance 0.1.

To my understanding this could be done two ways in ClusteringConfig (please confirm if this is true)

  1. initialize 0.1-spaced features and use them as templates. Then, choose clustering_threshold = 0.1. (?)
  2. Use unsupervised method and specify the optimal radius to be 0.1. (???)

Number 2 isn't possible in the current API. We should allow the user to provide a real number as optimal_radius_method; then the search for the optimal radius is skipped entirely and the provided value is used instead, as illustrated below.
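Hypothetical usage under the proposed change (the keyword exists; accepting a plain number does not yet):

# Proposed: a real number for optimal_radius_method skips the search entirely.
config = ClusteringConfig(; optimal_radius_method = 0.1)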

Would Number 1 as described do what I want?

P.s.: A bit off-topic but relevant. This is confusing:

    clustering_threshold = 0.0: Maximum allowed distance between a feature and
    the cluster center for it to be considered inside the cluster. Only used
    when clust_method = "kNN_thresholded".

If the maximum allowed distance is 0 by default, each point would become its own cluster, right? Because, by definition, each point would exceed the allowed distance to be considered inside the cluster. In general the doc for this keyword is confusing and needs to be improved. cc @KalelR Did you mean to make the default value for this Inf?

Octopus like basins

I've just read Basins with Tentacles by Yuanzhao Zhang and Steven H. Strogatz.

The authors say that a basin is octopuslike if, starting from the attractor, a random perturbation in a given direction exits and re-enters the basin as the perturbation strength increases, and this continues many times.

I'm opening a feature request here, mainly because this is tightly linked with the minimal fatal shock algorithm #4 . We can kill two birds with one stone and reuse the same code.

In essence, we write code that produces perturbations along directions. This code produces as its result a boolean array with as many dimensions as the state space (one dimension is the radius, i.e. the perturbation strength; the other dimensions are the angles).

By careful analysis of this array we can find both (1) the minimal perturbation along a direction, and (2) whether there is octopus-like structure along this direction. The minimal fatal shock #4 is the minimum of (1), and the basin is octopuslike if all directions are octopuslike from (2). A sketch of the per-direction scan follows.
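A minimal sketch of such a scan along a single direction, assuming a mapper that labels initial conditions and a point u0 on the attractor:

# Sketch: for each perturbation strength along `dir`, record whether the
# perturbed state still converges to the original attractor.
function direction_scan(mapper, u0, dir; strengths = range(0, 5; length = 100))
    label0 = mapper(u0)
    return [mapper(u0 .+ s .* dir) == label0 for s in strengths]
end
# Octopus-like along `dir` = this boolean vector alternates between
# true and false many times as the strength increases.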

AttractorsViaRecurrences not working

I have added the Attractors.jl package to my Julia 1.8.5, but there are some issues with running it.
The code provided in the online documentation does not work. It produces an error saying that basins_of_attraction(mapper) is incompatible with the sparse version of AttractorsViaRecurrences. The online docs, however, do not make clear what method should be used instead of basins_of_attraction.

The full code:

using Attractors
using PredefinedDynamicalSystems
ds = PredefinedDynamicalSystems.thomas_cyclical(b = 0.1665)
# This computation takes about an hour
xg = yg = zg = range(-6.0, 6.0; length = 251)
mapper = AttractorsViaRecurrences(ds, (xg, yg, zg))
basins, attractors = basins_of_attraction(mapper)
attractors

The error message:
ArgumentError: Sparse version of AttractorsViaRecurrences is incompatible with
basins_of_attraction(mapper).

Package info:

Status ~/.julia/environments/v1.8/Manifest.toml
[f3fd9213] Attractors v1.5.1
[61744808] DynamicalSystems v3.0.0

tipping_probabilities: Kills Kernel when basin array is large

Describe the bug
Upon invoking prob = tipping_probabilities(basins_before, basins_after), the Julia kernel dies. I did some testing; could it depend on the array size? If the arrays of the provided basins before/after are large, could that cause the Julia kernel to die (possibly by memory exhaustion)? I am asking since I am new to Julia and do not know how to debug a "Killed" kernel message. In my case, basins_before and basins_after are 4-D arrays of 100x100x100x100 elements, reaching up to 200 MB, but with just three deterministic attractors. The formula for the computation of the tipping probabilities is straightforward. I tried with much smaller arrays and tipping_probabilities works. I am not sure what else could cause the kill.

Package versions
Status ~/.julia/environments/v1.8/Manifest.toml
[608a59af] ChaosTools v2.9.0
⌅ [5732040d] DelayEmbeddings v2.4.1
[61744808] DynamicalSystems v2.3.2
⌅ [6e36e845] DynamicalSystemsBase v2.8.0
⌅ [ed8fcbec] Entropies v1.1.2
[639c3291] RecurrenceAnalysis v1.8.1
Info Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use status --outdated -m

Restrict usage of Poincare map to have grid entirely only on the plane

At the moment the source code of our AttractorsViaRecurrences is unnecessarily complicated because it special-cases the Poincaré map. We must restrict the usage of the Poincaré map in the recurrences code so that the map is only defined on the hyperplane. This also means that we need to adjust what dimension returns for the Poincaré map: it must return dimension(dynamical_system) - 1, as it should have been doing all along.
