willcox-research-group / rom-operator-inference-python3

Operator Inference for data-driven, non-intrusive model reduction of dynamical systems.

Home Page: https://willcox-research-group.github.io/rom-operator-inference-Python3

License: MIT License

Languages: Python 99.80%, Makefile 0.20%
Topics: model-reduction, dynamical-systems, ode, pde, python, python-package, scientific-machine-learning


rom-operator-inference-python3's People

Contributors

algopaul, mtezzele, shanemcq18, swischuk


rom-operator-inference-python3's Issues

Prediction steps fail when input has more than 1 feature

The objective
Prediction when input has more than one feature

Tests already tried
Previously, my data had pressure as the only feature. I am now trying to stack a second feature as follows:

$$ \begin{bmatrix} p_{1,1} & p_{1,2} & \cdots & p_{1,n} \\ vz_{1,1} & vz_{1,2} & \cdots & vz_{1,n} \\ \vdots & \vdots & \vdots & \vdots \\ p_{m,1} & p_{m,2} & \cdots & p_{m,n} \\ vz_{m,1} & vz_{m,2} & \cdots & vz_{m,n} \\ \end{bmatrix} $$

where $vz$ is the velocity along the $z$ axis and $p$ is the pressure. With this configuration, I get the warning /rom_operator_inference/core/nonparametric/_public.py:486: IntegrationWarning: Required step size is less than spacing between numbers.
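For reference, a minimal sketch of the interleaved stacking described above (the array names pressure and vz and the sizes are illustrative, not taken from the original report):

import numpy as np

m, n = 4, 10                          # m spatial points, n snapshots (illustrative sizes).
pressure = np.random.random((m, n))   # p_{i,j} in the matrix above.
vz = np.random.random((m, n))         # vz_{i,j} in the matrix above.

# Interleave the two variables row by row, as in the matrix above: shape (2m, n).
data_to_train = np.empty((2 * m, n))
data_to_train[0::2] = pressure
data_to_train[1::2] = vz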

Code

train_data_derivative = opinf.pre.ddt(data_to_train, DELTA_T, order=6)  # Time derivative matrix (DELTA_T is the snapshot spacing).
Vr, _ = opinf.pre.pod_basis(data_to_train, r=r)                         # POD basis of rank r.
rom = opinf.ContinuousOpInfROM(modelform="cAHG")
rom.fit(Vr, data_to_train, train_data_derivative)
prediction = rom.predict(data_to_train[:, -1], np.arange(0.0499, 0.0597, 1e-4))  # Predict from the last training snapshot.

Potential bug saving parametric ROM class.

Describe the bug
I am trying to save an InterpolatedContinuousOpInfROM that I have fit to a file. There seems to be an issue with the save function: when I try to save, I get an error about a LinearOperator object not having the parameters attribute.

To Reproduce

import numpy as np
import opinf

N = 10          # Full-order dimension.
K = 5           # Reduced dimension.
n_snaps = 8     # Number of snapshots.
Phi = np.random.normal(size=(N, K))
states = np.random.normal(size=(N, n_snaps))
states_dot = np.random.normal(size=(N, n_snaps))
params = np.array([1])
rom = opinf.InterpolatedContinuousOpInfROM("A", InterpolatorClass="auto")
rom.fit(Phi, parameters=params, states=[states], ddts=[states_dot])
rom.save("my_model.h5", overwrite=True)

Expected behavior
From the documentation, it looks like the model should save and then be loadable.

Output
Traceback (most recent call last):
File "test_save.py", line 13, in
rom.save('my_model.h5',overwrite=True)
File "/Users/ejparis/.local/lib/python3.9/site-packages/opinf/roms/interpolate/base.py", line 382, in save
hf.create_dataset(f"operators/{key}
", data=op.matrices)
File "/Users/ejparis/.local/lib/python3.9/site-packages/opinf/roms/interpolate/_base.py", line 381, in save
hf.create_dataset("parameters", data=op.parameters)
AttributeError: 'LinearOperator' object has no attribute 'parameters'

Additional context
I have no issues in the non-parametric context.

Failing test

This test is failing:
pytest tests/_core/_interpolate/test_inferred.py

Allow for complex-valued states/operators/snapshot matrices

Is your feature request related to a problem? Please describe.
It would be great to allow for complex-valued states/operators/snapshot matrices.

Describe the solution you'd like
Allow integration of complex-valued ODEs + solve the least squares problem using complex-valued snapshot matrices. I believe this could be done easily by changing the dtype in certain definitions to allow complex128.
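As a quick sanity check that the underlying linear algebra supports this, here is a minimal sketch (plain NumPy, not the package API) of the operator-inference least-squares problem with complex128 data:

import numpy as np

rng = np.random.default_rng(0)
r, k = 5, 100   # Reduced dimension and number of snapshots (illustrative).

# Complex-valued reduced snapshots and their time derivatives.
Q = rng.normal(size=(r, k)) + 1j * rng.normal(size=(r, k))
Qdot = rng.normal(size=(r, k)) + 1j * rng.normal(size=(r, k))

# Infer a linear model dq/dt = A q by solving min ||Q^T A^T - Qdot^T||_F.
A_transpose, *_ = np.linalg.lstsq(Q.T, Qdot.T, rcond=None)
A = A_transpose.T
print(A.dtype)   # complex128: NumPy's lstsq already handles complex data.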

Data download script

Data files used in the examples live on the data branch. We should have a script in the docs/ folder that downloads the data and puts it in the correct place so that the tutorials can be executed locally. This will also enable testing the documentation compilation through GitHub Actions (we had an action previously but it would pass even if a notebook cell failed to execute).

TODO:

  • Write the script (a sketch is given after this list)
  • Update contributor guidelines
  • Remove .pre-commit.yaml from the data branch?
  • Reinstate GitHub Action for checking that the documentation compiles (there is also a LaTeX issue with plot labels, fix this too)
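A possible starting point for the script, assuming the data files are fetched from the data branch via raw.githubusercontent.com; the file list and destination folder below are placeholders, not the actual data files:

# docs/download_data.py (sketch)
import pathlib
import urllib.request

BRANCH_URL = (
    "https://raw.githubusercontent.com/"
    "willcox-research-group/rom-operator-inference-Python3/data"
)
FILES = ["example_data.npy"]            # Placeholder: actual file names TBD.
TARGET = pathlib.Path("docs/data")      # Placeholder: destination folder TBD.


def main():
    TARGET.mkdir(parents=True, exist_ok=True)
    for name in FILES:
        destination = TARGET / name
        if not destination.exists():
            print(f"Downloading {name}...")
            urllib.request.urlretrieve(f"{BRANCH_URL}/{name}", destination)


if __name__ == "__main__":
    main()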

Shifted Operator Inference

New feature: Shifted Operator Inference from the paper Predicting solar wind streams from the inner-heliosphere to Earth via shifted operator inference by Opal Issan (@opaliss) and Boris Kramer (@bokramer). This strategy shifts the state snapshots to a moving coordinate frame. In the paper, this is notated in Eq. (16),

$$\mathbf{u}_{i} \approx u(\mathbf{x},t_{i}) \mapsto \tilde{u}(\tilde{\mathbf{x}}(\mathbf{x}, t_{i}), t_{i}) \approx \tilde{\mathbf{u}}_{i}$$

where

$$\tilde{\mathbf{x}}(\mathbf{x},t) = \mathbf{x} + \mathbf{c}(t).$$

@opaliss will take the lead on this. Essentially this will involve writing a new transformer class, perhaps WaveshiftTransformer? See opinf.pre._shiftscale.ShiftScaleTransformer for another transformer to compare to. Implementation steps:

  • Create a new file in /src/opinf/pre/.
  • Define the class so it inherits from opinf.pre.TransformerTemplate and implements fit(), transform(), and inverse_transform() (a skeleton is sketched after this list).
  • Import the new class in /src/opinf/pre/__init__.py.
  • Write unit tests for the new class in a new file in /tests/pre/.
  • Compile the docs (make docs) and check that the automatically generated documentation page looks good.
  • If possible, demonstrate using the class in a new Jupyter notebook tutorial in docs/source/tutorials/.
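A minimal skeleton of the proposed transformer, assuming TransformerTemplate expects fit(), transform(), and inverse_transform() as listed above; the constructor arguments and the interpolation-based shifting are illustrative placeholders, not a finished design:

import numpy as np
import opinf


class WaveshiftTransformer(opinf.pre.TransformerTemplate):
    """Shift snapshots to a moving frame, x̃(x, t) = x + c(t) (sketch)."""

    def __init__(self, x, shifts):
        # x: spatial grid of shape (n,); shifts: c(t_i) for each snapshot, shape (k,).
        self.x = x
        self.shifts = shifts

    def fit(self, states):
        # Nothing to learn in this sketch; c(t) is supplied by the user.
        # A real implementation might estimate c(t) from the snapshots here.
        return self

    def transform(self, states):
        # ũ(x) = u(x - c(t)): interpolate each snapshot onto the shifted grid.
        # (Boundary handling is omitted in this sketch.)
        return np.column_stack(
            [np.interp(self.x - c, self.x, q) for q, c in zip(states.T, self.shifts)]
        )

    def inverse_transform(self, states_shifted):
        # Undo the shift: u(x) = ũ(x + c(t)).
        return np.column_stack(
            [np.interp(self.x + c, self.x, q) for q, c in zip(states_shifted.T, self.shifts)]
        )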

Recovery of Lotka-Volterra system

Hello,

I am trying to use this package to tackle the Lotka-Volterra system. I don't really expect it to be reducible, but I would like to recover the model matrices using your library. In theory, since the right-hand side is polynomial, I would expect to recover it pretty accurately. However, I do not manage to do so.

Please find attached the Python file that I am using and the problems that I encounter:

  • Depending on the discretization of the time interval, the ROM can or cannot be simulated up to the final time (100 versus 300 steps per unit time, for instance);
  • Even in the 100-steps case, the recovered matrices are not exactly the ones I put in, which leads to peaks that do not quite match the full-order model;
  • If I consider more than one predator-prey pair (the pairs do not mutually interact), the results are completely different from the full-order model.

Thank you in advance.

import numpy as np
import scipy.sparse as sparse
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
import opinf

# Construct the temporal domain.
t0, tf = 0, 20                 # Initial and final time.
k = tf*100 + 1                  # Temporal grid size: 100 steps per unit time works, 300 does not.
t = np.linspace(t0, tf, k)      # Temporal grid.
dt = t[1] - t[0]                # Temporal resolution.

print(f"Temporal step size δt = {dt}")

size_prey = 1                   # Number of prey species (equal to the number of predators).

alpha = np.linspace(2, 4, size_prey)
gamma = np.linspace(1, 2, size_prey)
beta = np.linspace(1, 2.5, size_prey)
delta = np.linspace(3, 7, size_prey)
print("alpha, beta, gamma, delta:", alpha, beta, gamma, delta)

# Construct the full-order operators: A (linear part) and B (interaction part).
A = sparse.diags(np.concatenate((alpha, -gamma)))
B = sparse.diags(np.stack((-beta, delta)), [size_prey, -size_prey], (2*size_prey, 2*size_prey))


def LotkaVolterra(t, x):
    return A @ x + (B @ x) * x

np.random.seed(1)
q0 = np.random.uniform(low=4., high=5.,size=2*size_prey)

# Solve the Lotka-Volterra model 
Q = solve_ivp(LotkaVolterra, [t0,tf], q0, t_eval=t, method="BDF", max_step=dt).y

# Plot Lotka-Volterra solutions
fig1 = plt.plot(Q.T)
plt.savefig("LotkaVolterra.png")
plt.close()

basis = opinf.pre.PODBasis().fit(Q,cumulative_energy=0.99)                # Construct the low-dimensional basis.
Qdot = opinf.pre.ddt(Q, dt, order=6)                    # Calculate the time derivative matrix.
rom = opinf.ContinuousOpInfROM(modelform="AH")        # Define the model structure.
solver = opinf.lstsq.L2Solver(regularizer=1e-10)       # Select a least-squares solver with regularization.
rom.fit(basis, Q, Qdot, solver=solver)                  # Construct the ROM by solving the operator inference.
Q_ROM = rom.predict(q0, t, method="BDF", max_step=dt)   # Simulate the ROM.

print("reduced A",rom.A_.entries)
print("reduced H",rom.H_.entries)

print("Shapes HF results and ROM results", Q.shape,Q_ROM.shape)
print("Errors in Frobenius norm", opinf.post.frobenius_error(Q, Q_ROM))               # Calculate the relative error of the ROM simulation.

fig=plt.plot(Q_ROM.T)
plt.savefig("LotkaVolterra_rom.png")
plt.close()
print("--------------------------")

Receive `Required step size is less than spacing between numbers` when using `cAH` mode.

The objective
I first ran the code with the cA model form. I then added the H term (cAH) to improve performance. However, I now receive the warning IntegrationWarning: Required step size is less than spacing between numbers from rom_operator_inference/core/nonparametric/_public.py.

Tests already tried

def prepare_data_and_train(data_to_train, r, regularizer=1e-2):
    train_data_derivative = opinf.pre.ddt(data_to_train, DELTA_T, order=6)  # Time derivative matrix.
    Vr, _ = opinf.pre.pod_basis(data_to_train, r=r)                         # POD basis of rank r.
    rom = opinf.ContinuousOpInfROM(modelform="cAH")
    rom.fit(Vr, data_to_train, train_data_derivative, regularizer=regularizer)
    return rom

During my simulation, I took care not to let the Courant number stray far from 1.0.

How could I debug this error?

Operator-centric Design

The existing ROM classes in the package can really only handle models of the form

$$ \frac{d}{dt}\mathbf{q}(t) = \mathbf{c} + \mathbf{Aq}(t) + \mathbf{H}[\mathbf{q}(t) \otimes \mathbf{q}(t)] + \mathbf{Bu}(t) $$

and similar. The user chooses which terms to include when the ROM is initialized (e.g., ContinuousOpInfROM(modelform="cAHB")), and there are properties for each of the operators, called c_, A_, H_, and B_, that are pretty strict about getting shapes right and so on. This setup is restrictive and a bit painful for development: right now, adding a new term to the above equation, e.g., a linear interaction between state and inputs or higher-order polynomial terms, requires changes in a lot of places. Furthermore, it is not easy to impose system-level structure, like the following:

$$ \begin{align*} \frac{d}{dt}\mathbf{q}_{1}(t) &= \mathbf{Aq}_{2}(t) + \mathbf{H}[\mathbf{q}_{1}(t) \otimes \mathbf{q}_{2}(t)] \\ \frac{d}{dt} \mathbf{q}_{2} (t) &= \mathbf{Aq}_{1}(t). \end{align*} $$

Here, we'd ideally like to specify that there is a quadratic interaction between $\mathbf{q}_{1}$ and $\mathbf{q}_{2}$ (but not $\mathbf{q}_{1}$ and $\mathbf{q}_{1}$ or $\mathbf{q}_{2}$ and $\mathbf{q}_{2}$) in the first equation, but not in the second equation, and so on. This is also an issue for Hamiltonian OpInf, see #45.

Here's a proposed solution. Instead of feeding the ROM a string that indicates the structure, feed it a list of operators (allowing certain strings as shortcuts for the monolithic case). So something like this:

import opinf

c = opinf.operators.ConstantOperator()
A = opinf.operators.LinearOperator()
H = opinf.operators.QuadraticOperator()
B = opinf.operators.InputOperator()

rom = opinf.ContinuousOpInfROM([c, A, H, B])

This would abolish the restrictive c_, A_, H_, and B_ properties in favor of an operators property in the ROM class. For the systems example, I can imagine something like this:

A12 = opinf.operators.LinearOperatorMulti(1, 2)
H112 = opinf.operators.QuadraticOperatorMulti(1, 2)
A21 = opinf.operators.LinearOperatorMulti(2, 1)

rom = opinf.ContinuousOpInfROMMulti([[A12, H112], [A21]])

These operators may need some additional information, such as the sizes of $\mathbf{q}_{1}$ and $\mathbf{q}_{2}$ (which are not necessarily equal).

To do this, each operator class would need a method like datablock(Q, U) that takes in states/inputs and spits out the chunk of the data matrix corresponding to that operator.
For example, ConstantOperator.datablock(Q, U) should always return a vector of ones, LinearOperator.datablock(Q, U) should return Q, QuadraticOperator.datablock(Q, U) should return Q ⊗ Q, and InputOperator.datablock(Q, U) should return U.
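A minimal sketch of what these datablock() methods might look like (class and method names follow the proposal above; this is not existing package API):

import numpy as np


class ConstantOperator:
    def datablock(self, Q, U=None):
        # One column of ones per snapshot.
        return np.ones((1, Q.shape[1]))


class LinearOperator:
    def datablock(self, Q, U=None):
        return Q


class QuadraticOperator:
    def datablock(self, Q, U=None):
        # Column-wise Kronecker product Q ⊗ Q (a compressed form could be used instead).
        return np.vstack([np.kron(q, q) for q in Q.T]).T


class InputOperator:
    def datablock(self, Q, U):
        return U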
Then, in the ROM class, we construct the data matrix with a single line:

data_matrix = np.hstack([op.datablock(Q, U).T for op in self.operators])

and similar for unpacking the operator matrix after solving the least-squares problem. In contrast, right now we have something like

data_matrix_chunks = []
if 'c' in self.modelform:
    data_matrix_chunks.append(ones.T)   # ones = length-k vector of ones.
if 'A' in self.modelform:
    data_matrix_chunks.append(Q.T)
if 'H' in self.modelform:
    data_matrix_chunks.append(QQ.T)     # QQ = column-wise Kronecker product Q ⊗ Q.
if 'B' in self.modelform:
    data_matrix_chunks.append(U.T)
# More 'if' statements are needed each time we add a new type of operator.
data_matrix = np.hstack(data_matrix_chunks)

This change will introduce a few API changes, but it should mostly add functionality to the package that isn't currently available.

Custom ODE Solvers in predict()

Currently, opinf.models.ContinuousModel.predict() wraps scipy.integrate.solve_ivp(). It would be nice to be able to pass a custom time stepper, probably via the method argument. This is also important for certain types of models, like Hamiltonian systems, which require symplectic integrators.

  • Write an IntegratorTemplate somewhere (new opinf.integrate submodule?); a minimal sketch is given below.
  • Update predict() so that method can be an integrator object.
  • Write a few common integrators as examples (forward/backward Euler, IMEX, etc.).

Need to think about how the integrator should interact with the list of operators or the model's rhs() method.
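As a starting point, a minimal sketch of the template and a forward Euler example (names and signatures here are illustrative, not a settled API):

import numpy as np


class IntegratorTemplate:
    """Advance dq/dt = f(t, q) from q0 over the time grid t (sketch)."""

    def solve(self, f, t, q0):
        raise NotImplementedError


class ForwardEuler(IntegratorTemplate):
    """Explicit first-order time stepper, the simplest example integrator."""

    def solve(self, f, t, q0):
        Q = np.empty((len(q0), len(t)))
        Q[:, 0] = q0
        for j in range(1, len(t)):
            dt = t[j] - t[j - 1]
            Q[:, j] = Q[:, j - 1] + dt * f(t[j - 1], Q[:, j - 1])
        return Q

With something along these lines, predict() could accept method=ForwardEuler() and hand its solve() the model's rhs().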

Hamiltonian Operator Inference

New feature: Hamiltonian Operator Inference from the paper Hamiltonian operator inference: Physics-preserving learning of reduced-order models for canonical Hamiltonian systems by Harsh Sharma (@harsh5332392), Zhu Wang, and Boris Kramer (@bokramer). The goal is to use Operator Inference for a canonical Hamiltonian system to learn a ROM that

  1. is a canonical Hamiltonian system;
  2. retains the physical interpretation of the state variables and preserves the coupling structure; and
  3. respects the symmetric property of structure-preserving space discretizations.

@harsh5332392 will take the lead on this. To begin, the main steps will be creating a SymplecticBasis class (cotangent lift algorithm) and a HamiltonianModel class that does the constrained optimization in fit() and symplectic integration in predict().

Suggested implementation steps:

Basis

  • Create a new file, /src/opinf/basis/_symplectic.py
  • Write a SymplecticBasis class that inherits from opinf.basis.LinearBasis and implements fit() (a sketch of the cotangent lift is given after this list). Or, you might be able to do this quickly by inheriting from the PODBasisMulti class, which represents a block-diagonal POD (one POD for each variable).
  • Import the new class in /src/opinf/basis/__init__.py.
  • Write and run tests for the new classes in a new file /tests/basis/test_symplectic.py.
  • Compile the docs (make docs) and check that the automatically generated documentation page looks good.
  • If possible, write a short section about this class in docs/source/guides/reduction.md. We should probably turn this into a notebook that shows the different kinds of basis functions you get from POD and the symplectic approach.
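A minimal sketch of the cotangent lift construction, assuming the snapshots are stacked as [position; momentum]; the class layout and method names are illustrative, not the final opinf.basis interface:

import numpy as np


class SymplecticBasis:
    """Cotangent-lift symplectic basis V = blkdiag(Phi, Phi) (sketch)."""

    def fit(self, states, r):
        # states: (2n, k) snapshots stacked as [q; p].
        n = states.shape[0] // 2
        Qq, Qp = states[:n], states[n:]

        # POD of the horizontally stacked position/momentum snapshots.
        Phi, _, _ = np.linalg.svd(np.hstack([Qq, Qp]), full_matrices=False)
        self.Phi = Phi[:, :r]

        # Block-diagonal basis; symplectic with respect to the canonical form.
        self.entries = np.block([
            [self.Phi, np.zeros((n, r))],
            [np.zeros((n, r)), self.Phi],
        ])
        return self

    def compress(self, states):
        return self.entries.T @ states

    def decompress(self, states_reduced):
        return self.entries @ states_reduced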

Model Class

  • Create a new file, /src/opinf/models/multi/_hamiltonian.py.
  • Write a HamiltonianModel class in the new file.
    • The fit() method should take in the data matrices, do the constrained optimizations, and initialize the operators of the ROM.
    • Implement the predict() method with a symplectic integrator (a minimal sketch follows this list).
  • Write and run tests for the new class in a new file /tests/models/multi/test_hamiltonian.py
  • Compile the docs and check that the automatically generated documentation page looks good.
  • Write a tutorial as a new Jupyter Notebook, docs/source/tutorials/hamiltonian.ipynb with the linear wave equation example.
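For the symplectic time integration in predict(), one candidate is the implicit midpoint rule, which is a symplectic method. A minimal sketch for a linear reduced system dz/dt = L z (e.g., L = J A for a quadratic Hamiltonian); the function name and signature are illustrative:

import numpy as np


def implicit_midpoint(L, z0, t):
    """Integrate dz/dt = L @ z with the (symplectic) implicit midpoint rule."""
    r = len(z0)
    Z = np.empty((r, len(t)))
    Z[:, 0] = z0
    for j in range(1, len(t)):
        dt = t[j] - t[j - 1]
        # Midpoint step for a linear system: (I - dt/2 L) z_j = (I + dt/2 L) z_{j-1}.
        Z[:, j] = np.linalg.solve(
            np.eye(r) - 0.5 * dt * L,
            (np.eye(r) + 0.5 * dt * L) @ Z[:, j - 1],
        )
    return Z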
