
ethanjameslew / autokoopman

42 stars · 5 watchers · 9 forks · 38.87 MB

AutoKoopman - automated Koopman operator methods for data-driven dynamical systems analysis and control.

License: GNU General Public License v3.0

Python 100.00%
data-driven-dynamics dynamic-mode-decomposition dynamical-systems koopman koopman-operators reachability sindy autoencoders deep-learning system-identification

autokoopman's Introduction

Hi there! 👋 I'm Ethan Lew.

I'm a Cyber-physical Systems Research Engineer with a passion for building safe autonomous systems using formal methods and AI. Currently, I work at Galois, where we're innovating in the verification of high assurance cyber-physical systems operating in unstructured and uncertain environments.

🔬 My Interests: Signals, Noise, and Trustworthy Systems

  • I find the interplay between signals and noise fascinating, especially when it comes to developing reliable systems in the face of uncertainty.
  • I thrive in multi-disciplinary teams, collaborating with experts from various domains to deliver end-to-end solutions that make a real impact.

👨‍💻 What You'll Find in My GitHub:

  • Projects centered around building safe and autonomous vehicles using formal methods and AI.
  • Approaches and methodologies for verifying high assurance cyber-physical systems, particularly in unstructured and uncertain environments.
  • Explorations of Rust as a safe and fast language, especially with an eye toward verification.


autokoopman's People

Contributors

ethanjameslew · kpotomkin · tumcps


autokoopman's Issues

Improvements for Continuous Koopman

From meeting 02/23

  • Reset at every step
  • Simulate using just the $A$ and $B$ matrices
  • Adaptive step

Look into the accuracy here; the fit is suspect.

This is needed for the reachability work.

CC @Abdu-Hekal

Support for trigonometric functions

The spring benchmark has cosine and sine functions in its differential equations. Does SymbolicContinuousSystem currently support this? If not, could we add support for these functions?
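For reference, a minimal sketch of how dynamics with trigonometric terms might be written down symbolically, assuming SymbolicContinuousSystem accepts arbitrary sympy expressions (the import path and constructor signature here are assumptions):

```python
import sympy as sp
from autokoopman.core.system import SymbolicContinuousSystem  # assumed import path

# Hypothetical pendulum/spring-style system with a sin() term in its vector field.
theta, omega = sp.symbols("theta omega")
g, length = 9.81, 1.0
spring = SymbolicContinuousSystem(
    [theta, omega],                          # state variables
    [omega, -(g / length) * sp.sin(theta)],  # dynamics containing sin(theta)
)
```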

ODEs with Constraints

I have been going through the benchmarks from the original ARCH competition, and some of them have constraints that are not implemented. For example, "Prde20" has the constraint x(t) + y(t) + z(t) = 10 for all t.
Does AutoKoopman currently have support for implementing these constraints for continuous systems? If not, could we add some support for constraints on dynamics?
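As a stopgap while constraints are unsupported, one workaround is to use the constraint to eliminate a state before learning and restore it afterwards; a hypothetical sketch for the Prde20-style constraint x + y + z = 10:

```python
import numpy as np

def drop_constrained_state(states):
    """states: (T, 3) array of [x, y, z]; learn on [x, y] only."""
    return states[:, :2]

def restore_constrained_state(reduced, total=10.0):
    """reduced: (T, 2) array of [x, y]; recover z = total - x - y."""
    z = total - reduced.sum(axis=1, keepdims=True)
    return np.hstack([reduced, z])
```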

Specifying hyperparameters in the auto_koopman function

I think we should use the same convention for specifying values and ranges for all hyperparameters (n_obs, rank, lengthscale, enc_dim, n_layers). Currently this is quite inconsistent: for n_obs one specifies the maximum number tried, while for rank one has to specify a range and gets an error when trying to use a fixed value. I propose that every hyperparameter accept either a single value, a range (start, stop), or a range with a step size (start, stop, step), as sketched below.
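A sketch of that convention; the helper name and the way specs are expanded into candidate values are illustrative, not the current API:

```python
import numpy as np

def expand_hyperparameter(spec):
    """Expand a value, (start, stop), or (start, stop, step) spec into candidates."""
    if np.isscalar(spec):
        return [spec]                                  # e.g. rank=10 (fixed value)
    if len(spec) == 2:
        return list(range(spec[0], spec[1] + 1))       # e.g. n_obs=(10, 200)
    return list(range(spec[0], spec[1] + 1, spec[2]))  # e.g. n_obs=(10, 200, 10)

# expand_hyperparameter(10)           -> [10]
# expand_hyperparameter((2, 5))       -> [2, 3, 4, 5]
# expand_hyperparameter((10, 50, 10)) -> [10, 20, 30, 40, 50]
```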

Add PyTorch to requirements?

Should we add PyTorch to the requirements in setup.py so that it is installed automatically when installing the library?
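One option, if we want to keep the core install light, is to expose torch through an extra rather than a hard requirement; a sketch of what that could look like in setup.py (the install_requires list and extra name shown here are illustrative):

```python
from setuptools import setup, find_packages

setup(
    name="autokoopman",
    packages=find_packages(),
    install_requires=["numpy", "scipy", "scikit-learn", "sympy", "tqdm"],
    extras_require={"deepk": ["torch"]},  # pip install "autokoopman[deepk]"
)
```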

Demo Notebooks

Summary
Make a collection of Jupyter notebooks to demonstrate AutoKoopman. Refer to the README for direction.

Default value for t_eval for solve_ivp

I think it would be nice if, when t_eval is not explicitly specified by the user, it were computed automatically from the model's time-step size and the length of the input vector. Currently an error is raised if t_eval is not provided, which is a little annoying in my opinion.
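A sketch of the proposed default, assuming the model knows its sampling period and that inputs has one row per step (names are illustrative):

```python
import numpy as np

def default_t_eval(tspan, sampling_period, inputs=None):
    """Build t_eval from the model's time step when the user does not pass one."""
    if inputs is not None:
        # one evaluation time per input sample
        return tspan[0] + sampling_period * np.arange(len(inputs))
    return np.arange(tspan[0], tspan[1] + sampling_period / 2, sampling_period)
```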

Automatic normalization of training data

There can be an unexpected bias in prediction error when the training data is not normalized. If the input data, for example, consists of large positions and small angles, the regression error will focus much more on getting the position right than the angle. If you normalize all the training data to [0, 1] within the ranges of the data, you get more balanced prediction errors.

It would be nice if the library could do something like this automatically, either using absolute bounds on all dimensions or computing a per-dimension mean and standard deviation and normalizing by the standard deviation.
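A sketch of both options mentioned above; neither is what the library currently does:

```python
import numpy as np

def minmax_normalize(X):
    """Scale each state dimension of X (T x n) into [0, 1] using its data range."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

def standardize(X):
    """Center each dimension of X and divide by its standard deviation."""
    std = X.std(axis=0)
    return (X - X.mean(axis=0)) / np.where(std > 0.0, std, 1.0)
```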

Matlab FFI Wrapper?

Plan a workflow to use AutoKoopman with MATLAB. The FFI might be too cumbersome for an average user (?)

Progress bar stops before 100%

The progress bar stops at 97% or 98% but never reaches 100%, even when the code has already finished, which might be confusing for the user. Maybe we can hardcode the progress bar to 100% at the end of the computations.
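If the tuning loop is driven by tqdm, the hardcoding could be as small as the following sketch:

```python
from tqdm import tqdm

pbar = tqdm(total=200, desc="Tuning BayesianOptTuner")
# ... tuning iterations call pbar.update(1), possibly fewer than `total` times ...
pbar.n = pbar.total  # snap the bar to 100% once tuning has actually finished
pbar.refresh()
pbar.close()
```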

Error when running bayesian optimizer for black-box model

Error trace:

Tuning BayesianOptTuner: 0%| | 0/200 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/Users/b6062805/Documents/Koopman/AutoKoopman/benchmarks/blackBox.py", line 106, in
mse, perc_error = test_trajectories(true_trajectories, model, tspan)
File "/Users/b6062805/Documents/Koopman/AutoKoopman/benchmarks/blackBox.py", line 43, in test_trajectories
trajectory = model.solve_ivp(
File "/Users/b6062805/Documents/Koopman/AutoKoopman/autokoopman/core/system.py", line 223, in solve_ivp
states = np.zeros((len(times), len(self.names)))
TypeError: object of type 'NoneType' has no len()
Tuning BayesianOptTuner: 2%|β–Ž | 5/200 [00:00<00:36, 5.29it/s]

Bug: Older AutoKoopman is More Accurate

I have been trying to run the Koopman falsification code on a different machine and have noticed quite different results. The main branch is giving different results than the version I have (which I installed straight after the fix you provided regarding the time step). I am attaching two figures highlighting this, with the result after one iteration of learning the Van der Pol model. The old version learns a very accurate representation whereas the current main does not. I haven't been able to locate the source of the error yet, so I figured it was best to flag it with you.

(two attached figures: Van der Pol fit after one iteration, old version vs. current main)

Error in enhancement linear extraction

The linear extraction enhancement causes an error for RFF observables on benchmarks with inputs. Running the F1tenthCar benchmark (a benchmark with input) results in the following error:
ValueError: could not broadcast input array from shape (204,) into shape (4,)

Unable to Run Benchmarks

Summary

I would like some help running the benchmarks on my side so I can troubleshoot issues. I am particularly confused by

from glop import Glop

as I cannot find any library that has such an object (I think it's a linear program solver?)

@Abdu-Hekal could you please provide some instructions here?

Hyperparameter Tuner

Summary
Implement hyperparameter tuning objects, using sampling approaches like

  • Monte Carlo
  • BOpt
  • Grid Search

Allow users to direct a bootstrapping loop, like

  • k-folds validation
  • LOO

Users should be able to add their own scoring functions as well.

Error when running bayesian optimizer for real data

Error trace:

File "/Users/b6062805/Documents/Koopman/AutoKoopman/benchmarks/realDataAll.py", line 175, in
perc_error, mse = compute_error(model, test_data)
File "/Users/b6062805/Documents/Koopman/AutoKoopman/benchmarks/realDataAll.py", line 103, in compute_error
trajectory = model.solve_ivp(
File "/Users/b6062805/Documents/Koopman/AutoKoopman/autokoopman/core/system.py", line 264, in solve_ivp
states[idx + 1] = self.step(
File "/Users/b6062805/Documents/Koopman/AutoKoopman/autokoopman/core/system.py", line 344, in step
return self._step_func(time, state, sinput)
File "/Users/b6062805/Documents/Koopman/AutoKoopman/autokoopman/estimator/koopman.py", line 78, in step_func
obs = (self.obs(x).flatten())[np.newaxis, :]
File "/Users/b6062805/Documents/Koopman/AutoKoopman/autokoopman/core/observables.py", line 32, in call
return self.obs_fcn(X)
File "/Users/b6062805/Documents/Koopman/AutoKoopman/autokoopman/core/observables.py", line 165, in obs_fcn
return self.poly.transform(np.atleast_2d(X))
File "/usr/local/lib/python3.9/site-packages/sklearn/preprocessing/_polynomial.py", line 369, in transform
X = self._validate_data(
File "/usr/local/lib/python3.9/site-packages/sklearn/base.py", line 566, in _validate_data
X = check_array(X, **check_params)
File "/usr/local/lib/python3.9/site-packages/sklearn/utils/validation.py", line 800, in check_array
_assert_all_finite(array, allow_nan=force_all_finite == "allow-nan")
File "/usr/local/lib/python3.9/site-packages/sklearn/utils/validation.py", line 114, in _assert_all_finite
raise ValueError(
ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
Tuning BayesianOptTuner: 5%|β–Œ | 10/200 [00:03<00:57, 3.28it/s]
Tuning BayesianOptTuner: 2%|β–Ž | 5/200 [00:03<02:09, 1.50it/s]

Create Object Hierarchy

Summary

Define the interfaces of the AutoKoopman library. Refer to the structure made in the README.md to determine the functionality requirements.

Error when running Robot benchmark with Koopman

@KochdumperNiklas I get this error when running the robot benchmark (for any observable type)
Error trace:
Tuning GridSearchTuner: 0%| | 0/5 [00:00<?, ?it/s]
/Users/b6062805/Documents/Koopman/AutoKoopman/autokoopman/core/tuner.py:149: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
end_errors = np.array([s.states.flatten() for s in errors])
Error: operands could not be broadcast together with shapes (20,) (88,)
Tuning GridSearchTuner: 0%| | 0/5 [00:00<?, ?it/s]
Traceback (most recent call last):
File "AutoKoopman/notebooks/realDataAll.py", line 175, in
perc_error, mse = compute_error(model, test_data)
File "AutoKoopman/notebooks/realDataAll.py", line 103, in compute_error
trajectory = model.solve_ivp(
File "AutoKoopman/autokoopman/core/system.py", line 258, in solve_ivp
states = np.zeros((len(times), len(self.names)))
TypeError: object of type 'NoneType' has no len()

Assertion error for 1d inputs

@EthanJamesLew For systems with 1 input, the following error is thrown: AssertionError: inputs must be able to be converted to 2D numpy array. How do we learn systems with only 1 input?
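Until the estimator accepts 1-D input arrays directly, a likely workaround is to reshape the single channel into a column; a small illustration with made-up data:

```python
import numpy as np

# 100 time steps, 1 input channel.
u = np.sin(np.linspace(0.0, 10.0, 100))  # shape (100,): trips the assertion
u_2d = u.reshape(-1, 1)                   # shape (100, 1): a valid 2-D input array
```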

Better Demo Example

  • set a seed so the example reproduces
  • train on more trajectories for more stable output (both points are sketched below)
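A sketch of both points; the variable names and the way the demo generates trajectories are illustrative:

```python
import numpy as np

np.random.seed(0)     # fix the RNG so the notebook reproduces run-to-run
n_trajectories = 20   # more training trajectories -> more stable identified model
initial_states = np.random.uniform(-1.0, 1.0, size=(n_trajectories, 2))
# ... simulate the true system from each initial state and pass all
# resulting trajectories to auto_koopman for training ...
```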

Inputs and States

When training, the number of inputs currently needs to equal the number of states. It seems to me that for n states we should have n-1 inputs (the last state has no input applied after it). The same applies when running the Koopman model: to output n states we should pass a teval array of length n-1 instead of n.
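A small illustration of the shape convention being proposed (sizes are made up):

```python
import numpy as np

n = 50                           # number of state samples in a trajectory
states = np.zeros((n, 3))        # n states
inputs = np.zeros((n - 1, 1))    # one input per transition -> n - 1 inputs
teval = 0.1 * np.arange(1, n)    # n - 1 evaluation times after the initial state
```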

Write Main README

Summary
Write

  • package overview
  • usage
  • examples
  • installation
  • contribution
  • licensing

SindyEstimator example with autokoopman

Thanks for creating the AutoKoopman toolbox. I was trying the examples from the notebooks folder of AutoKoopman and needed the system equations. Can you please give an example of using pySINDy with AutoKoopman that returns the system equations?

I need to model a dynamical system, and a transparent set of equations would be really helpful.
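In the meantime, a hypothetical sketch of fitting pySINDy directly to trajectory samples to recover readable governing equations (the toy data here is illustrative and independent of the AutoKoopman notebooks):

```python
import numpy as np
import pysindy as ps

# Toy harmonic-oscillator trajectory sampled at a fixed time step.
dt = 0.05
t = np.arange(0.0, 10.0, dt)
X = np.stack([np.cos(t), -np.sin(t)], axis=1)

model = ps.SINDy()
model.fit(X, t=dt)
model.print()  # prints the identified equations (here roughly x0' = x1, x1' = -x0)
```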

Error when running F1tenth benchmark with poly observables

@EthanJamesLew @KochdumperNiklas I get this error when running the f1tenth benchmark with polynomial observables.

Error trace:

F1tenthCar --------------------------------------------------------------


ValueError Traceback (most recent call last)
/var/folders/jc/77d0pws90139vg5cc_fg8s_w5hwxm2/T/ipykernel_40751/2272883485.py in
173
174 # compute error
--> 175 perc_error, mse = compute_error(model, test_data)
176
177 # store and print results

/var/folders/jc/77d0pws90139vg5cc_fg8s_w5hwxm2/T/ipykernel_40751/2272883485.py in compute_error(model, test_data)
101 teval = np.linspace(start_time, end_time, len(t.times))
102
--> 103 trajectory = model.solve_ivp(
104 initial_state=iv,
105 tspan=(start_time, end_time),

/usr/local/anaconda3/lib/python3.8/site-packages/autokoopman/core/system.py in solve_ivp(self, initial_state, tspan, teval, inputs, sampling_period)
262 diff[diff < 0.0] = float("inf")
263 tidx = diff.argmin()
--> 264 states[idx + 1] = self.step(
265 float(time), states[idx], np.atleast_1d(inputs[tidx])
266 ).flatten()

/usr/local/anaconda3/lib/python3.8/site-packages/autokoopman/core/system.py in step(self, time, state, sinput)
342 self, time: float, state: np.ndarray, sinput: Optional[np.ndarray]
343 ) -> np.ndarray:
--> 344 return self._step_func(time, state, sinput)
345
346 @property

/usr/local/anaconda3/lib/python3.8/site-packages/autokoopman/estimator/koopman.py in step_func(t, x, i)
76
77 def step_func(t, x, i):
---> 78 obs = (self.obs(x).flatten())[np.newaxis, :]
79 if self._has_input:
80 return np.real(

/usr/local/anaconda3/lib/python3.8/site-packages/autokoopman/core/observables.py in __call__(self, X)
30
31 def __call__(self, X: np.ndarray) -> np.ndarray:
---> 32 return self.obs_fcn(X)
33
34 def obs_grad(self, X: np.ndarray) -> np.ndarray:

/usr/local/anaconda3/lib/python3.8/site-packages/autokoopman/core/observables.py in obs_fcn(self, X)
163
164 def obs_fcn(self, X: np.ndarray) -> np.ndarray:
--> 165 return self.poly.transform(np.atleast_2d(X))
166
167

/usr/local/anaconda3/lib/python3.8/site-packages/sklearn/preprocessing/_polynomial.py in transform(self, X)
378 check_is_fitted(self)
379
--> 380 X = self._validate_data(
381 X, order="F", dtype=FLOAT_DTYPES, reset=False, accept_sparse=("csr", "csc")
382 )

/usr/local/anaconda3/lib/python3.8/site-packages/sklearn/base.py in _validate_data(self, X, y, reset, validate_separately, **check_params)
575 raise ValueError("Validation should be done on X, y or both.")
576 elif not no_val_X and no_val_y:
--> 577 X = check_array(X, input_name="X", **check_params)
578 out = X
579 elif no_val_X and not no_val_y:

/usr/local/anaconda3/lib/python3.8/site-packages/sklearn/utils/validation.py in check_array(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, estimator, input_name)
897
898 if force_all_finite:
--> 899 _assert_all_finite(
900 array,
901 input_name=input_name,

/usr/local/anaconda3/lib/python3.8/site-packages/sklearn/utils/validation.py in _assert_all_finite(X, allow_nan, msg_dtype, estimator_name, input_name)
144 "#estimators-that-handle-nan-values"
145 )
--> 146 raise ValueError(msg_err)
147
148 # for object dtype data, we only check for NaNs (GH-13254)

ValueError: Input X contains NaN.
PolynomialFeatures does not accept missing values encoded as NaN natively. For supervised learning, you might want to consider sklearn.ensemble.HistGradientBoostingClassifier and Regressor which accept missing values encoded as NaNs natively. Alternatively, it is possible to preprocess the data, for instance by using an imputer transformer in a pipeline or drop samples with missing values. See https://scikit-learn.org/stable/modules/impute.html You can find a list of all estimators that handle NaN values at the following page: https://scikit-learn.org/stable/modules/impute.html#estimators-that-handle-nan-values

Simulation with Resets

When we simulate with solve_ivp, are we currently resetting the state after each time step? Or did I get this wrong? If so, I think it would be good to have an option reset = "all" / "none" / integer (number of time steps between resets) for solve_ivp so that the user can choose.
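A sketch of how the integer form of that option might behave, with "all" mapping to 1 and "none" mapping to the full horizon; the step function and reference trajectory are placeholders:

```python
import numpy as np

def rollout_with_resets(step, reference, reset=1):
    """Roll a one-step model forward, re-anchoring to the reference
    trajectory every `reset` steps (reset=1 resets at every step)."""
    states = [reference[0]]
    for k in range(len(reference) - 1):
        x = reference[k] if k % reset == 0 else states[-1]
        states.append(step(x))
    return np.array(states)
```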

Verbose options

I think it would be good to add an option verbose=True/False, which can be used to hide the progress bar if it is not desired.
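The progress bars look like tqdm bars, and tqdm already has a disable flag, so threading a verbose option through could be as small as this sketch:

```python
from tqdm import tqdm

def tune(n_iter, verbose=True):
    # tqdm's `disable` flag hides the bar entirely when verbose=False
    for _ in tqdm(range(n_iter), desc="Tuning", disable=not verbose):
        ...  # one tuning iteration
```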

Restructure `auto_koopman`

autokoopman.py is a very complex set of functions offering the high-level functionality of the AutoKoopman library. The core should be extended to make this cleaner and more reusable.

More informative error messages

For example, when I forget to specify the input for solve_ivp, I get the error message "'NoneType' object is not subscriptable", which is very cryptic.
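A sketch of the kind of guard that would produce a clearer message in that case (illustrative, not the current solve_ivp code):

```python
def check_solve_ivp_args(inputs, has_input):
    """Fail early with a readable message instead of a NoneType error later."""
    if has_input and inputs is None:
        raise ValueError(
            "this model was trained with inputs; pass `inputs=` to solve_ivp"
        )
```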

Obs type "quadratic" throws error if sampling period is too large for "bio2" benchmark

For "larger" sampling periods in the "bio2" benchmark (sampling period >= 0.1), the returned "trajectory.states" from "model.solve_vp" returns rows with nan, such that the trajectory returned is incomplete.
Whilst running it also throws: "RuntimeWarning: overflow encountered in double_scalars".

Could this be an inherit limitation of the obs type such that for more complex systems it requires a smaller sampling rate?
