
sbinn's People

Contributors

lululxvi, mitchelldaneker, zzhang222


sbinn's Issues

output_transform function

In the paper, under "Output-scaling layer", it is mentioned that the scaling factors are the "magnitudes of the mean values of the ODE solution", but the function implementation seems different:

def output_transform(t, y):
    idx = 1799
    # Line through the first and last observations: linear(t) = k * t + b
    k = (data_y[idx] - data_y[0]) / (data_t[idx] - data_t[0])
    b = (data_t[idx] * data_y[0] - data_t[0] * data_y[idx]) / (
        data_t[idx] - data_t[0]
    )
    linear = torch.as_tensor(k) * t + torch.as_tensor(b)
    # Gate that vanishes at the two endpoints, pinning the output to the data there
    factor = torch.tanh(t) * torch.tanh(idx - t)
    # Per-state scaling of the raw network output
    return linear + factor * torch.Tensor([1, 1, 1e2, 1, 1, 1]) * y

Could you please explain the methodology in detail? Also, how would this differ in the case of lab data?
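
For reference, a plain reading of what the snippet computes (derived from the code above, not an authoritative account of the paper): with s = (1, 1, 100, 1, 1, 1), the transform is

    y_hat(t) = linear(t) + tanh(t) * tanh(idx - t) * s * y_net(t)

where linear(t) is the line through the first and last observations. Since tanh(0) = 0, the gate vanishes at t = 0 and again at t = idx (assuming t is measured in the same units as the index), so the output is pinned to the data at both endpoints; in between, s rescales each network output toward the rough magnitude of the corresponding state, which is presumably where the "magnitudes of the mean values" enter.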

Errors when running practical_identifiability.jl

I tried running the practical_identifiability.jl file and got this error:

┌ Warning: Assignment to `F` in soft scope is ambiguous because a global variable by the same name exists: `F` will be treated as a new local. Disambiguate by using `local F` to suppress this warning or `global F` to assign to the existing global variable.
└ @ ~/work/practical_identifiability.jl:89
ERROR: LoadError: UndefVarError: F not defined
Stacktrace:
 [1] top-level scope
   @ ~/work/practical_identifiability.jl:89
in expression starting at /home/jovyan/work/practical_identifiability.jl:84

I tried adding `global` in front of `F` on line 89, as suggested. I then got this error:

# julia practical_identifiability.jl
ERROR: LoadError: KeyError: key :tickfont_pointsize not found
Stacktrace:
  [1] getindex(h::Dict{Symbol, Any}, key::Symbol)
    @ Base ./dict.jl:481
  [2] default(k::Symbol)
    @ Plots ~/.julia/packages/Plots/tXtrW/src/args.jl:1083
  [3] warn_on_unsupported_args(pkg::Plots.GRBackend, plotattributes::RecipesPipeline.DefaultsDict)
    @ Plots ~/.julia/packages/Plots/tXtrW/src/args.jl:1616
  [4] _add_the_series(plt::Plots.Plot{Plots.GRBackend}, sp::Plots.Subplot{Plots.GRBackend}, plotattributes::RecipesPipeline.DefaultsDict)
    @ Plots ~/.julia/packages/Plots/tXtrW/src/pipeline.jl:413
  [5] add_series!(plt::Plots.Plot{Plots.GRBackend}, plotattributes::RecipesPipeline.DefaultsDict)
    @ Plots ~/.julia/packages/Plots/tXtrW/src/pipeline.jl:343
  [6] _process_seriesrecipe(plt::Any, plotattributes::Any)
    @ RecipesPipeline ~/.julia/packages/RecipesPipeline/F2mWY/src/series_recipe.jl:46
  [7] _process_seriesrecipes!(plt::Any, kw_list::Any)
    @ RecipesPipeline ~/.julia/packages/RecipesPipeline/F2mWY/src/series_recipe.jl:27
  [8] recipe_pipeline!(plt::Any, plotattributes::Any, args::Any)
    @ RecipesPipeline ~/.julia/packages/RecipesPipeline/F2mWY/src/RecipesPipeline.jl:97
  [9] _plot!(plt::Plots.Plot, plotattributes::Any, args::Any)
    @ Plots ~/.julia/packages/Plots/tXtrW/src/plot.jl:208
 [10] #plot#135
    @ ~/.julia/packages/Plots/tXtrW/src/plot.jl:91 [inlined]
 [11] heatmap(args::Any; kw::Base.Pairs{Symbol, V, Tuple{Vararg{Symbol, N}}, NamedTuple{names, T}} where {V, N, names, T<:Tuple{Vararg{Any, N}}})
    @ Plots ~/.julia/packages/RecipesBase/qpxEX/src/RecipesBase.jl:410
 [12] top-level scope
    @ ~/work/practical_identifiability.jl:95
in expression starting at /home/jovyan/work/practical_identifiability.jl:95

I've tried Googling around a bit, but I couldn't find anything with a similar issue.
I'm running the file through Docker (version 4.5.1) with Julia v1.7.0. Running it on Windows 10 with Julia v1.7.2 produced the same errors.

Any help would be appreciated.

epochs is deprecated and will be removed in a future version. Use iterations instead.

Hi, Professor @lululxvi,
I was trying to run sbinn_pytorch.py and got the following error:

(base) yufu@Yus-MacBook-Pro sbinn % python sbinn_pytorch.py
Using backend: pytorch
Other supported backends: tensorflow.compat.v1, tensorflow, jax, paddle.
paddle supports more examples now and is recommended.
Compiling model...
'compile' took 0.000078 s

Warning: epochs is deprecated and will be removed in a future version. Use iterations instead.
Training model...

zsh: segmentation fault  python sbinn_pytorch.py

It seems that there is a version problem: my DeepXDE version is 1.9.1 and my PyTorch version is 2.0.1. Could you provide some insight on this and maybe a possible solution?

Best,
Yu.
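
For what it's worth, a minimal diagnostic sketch (not from the original report; it uses only the standard version attributes of the two packages) to confirm which builds the script actually imports, since a segmentation fault right at the start of training usually points to a binary incompatibility between the backend bindings and the installed torch rather than a bug in the script:

import deepxde as dde
import torch

# Confirm the versions the failing script actually picks up
print("DeepXDE:", dde.__version__)
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())

If these differ from the versions reported above, the environment is resolving a different install; if they match, downgrading torch to a release contemporary with DeepXDE 1.9.1 is a common workaround.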

AttributeError: 'RefVariable' object has no attribute 'tanh'

I'm now using DeepXDE to solve an inverse problem, and I think SBINN can help me get better predictions of the unknown parameters.
However, I get an attribute error and I don't know how to fix it.
My code:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_bvp
import sys
import re

import deepxde as dde
from deepxde.backend import tf
import math


L = 0.1  # Column length(m)
d = 0.0212  # Column diameter(m)
A = np.pi*d*d/4  # Column cross sectional area (m2)
e = 0.62  # porosity
v = 30.0/1000.0/1000/60.0/A/e  # Velocity (m/s)
f = 12  # Feed concentration (-)
te = 250  # final time (s)
t_inj = 100  # first injected time for component A and B (s)


scale_x = 100
scale_t = 1/25
scale_y = 1
L_scaled = L * scale_x
v_scaled = v * scale_x / scale_t
t_scaled = te * scale_t
t_inj_scaled = t_inj * scale_t


traindata1 = np.load(r"C:\Users\10716\OneDrive\桌面\DeepXDE (5)\SMB\Inverse_outlet_A_injected100s_MC.npy")
traindata2 = np.load(r"C:\Users\10716\OneDrive\桌面\DeepXDE (5)\SMB\Inverse_outlet_B_injected100s_MC.npy")


xx1 = traindata1[1:, 2:3]
tt1 = traindata1[1:, 0:1]
Ca = traindata1[1:, 1:2]
xx2 = traindata2[1:, 2:3]
tt2 = traindata2[1:, 0:1]
Cb = traindata2[1:, 1:2]

observe_x1, observe_Ca = np.hstack((scale_x * xx1, scale_t * tt1)), Ca
observe_x2, observe_Cb = np.hstack((scale_x * xx2, scale_t * tt2)), Cb

observe_y1 = dde.icbc.PointSetBC(observe_x1, Ca, component=0)
observe_y2 = dde.icbc.PointSetBC(observe_x2, Cb, component=1)

observe_x = np.vstack((observe_x1, observe_x2))


ka_scaled_ = dde.Variable(0.0)  # Mass transfer coefficient of A (1/s)
kb_scaled_ = dde.Variable(0.0)  # Mass transfer coefficient of B (1/s)

Ha_ = dde.Variable(0.0)  # Henry constant of A (-)
Hb_ = dde.Variable(0.0)  # Henry constant of B (-)

ka_scaled = 5 * np.tanh(ka_scaled_) + 7.5
kb_scaled = 5 * np.tanh(kb_scaled_) + 7.5

Ha = 4 * tf.tanh(Ha_) + 5
Hb = 4 * tf.tanh(Hb_) + 5

geom = dde.geometry.Interval(0, L_scaled)
timedomain = dde.geometry.TimeDomain(0, t_scaled)
geomtime = dde.geometry.GeometryXTime(geom, timedomain)


# gPINNs
def pde(x, y):
    y_star = scale_y * y
    ca, cb, qa, qb = y_star[:, 0:1], y_star[:, 1:2], y_star[:, 2:3], y_star[:, 3:4]

    ca_x = dde.grad.jacobian(y_star, x, i=0, j=0)
    ca_t = dde.grad.jacobian(y_star, x, i=0, j=1)
    ca_xx = dde.grad.hessian(y_star, x, i=0, j=0, component=0)
    ca_xt = dde.grad.hessian(y_star, x, i=0, j=1, component=0)
    ca_tt = dde.grad.hessian(y_star, x, i=1, j=1, component=0)

    cb_x = dde.grad.jacobian(y_star, x, i=1, j=0)
    cb_t = dde.grad.jacobian(y_star, x, i=1, j=1)
    cb_xx = dde.grad.hessian(y_star, x, i=0, j=0, component=1)
    cb_xt = dde.grad.hessian(y_star, x, i=0, j=1, component=1)
    cb_tt = dde.grad.hessian(y_star, x, i=1, j=1, component=1)

    qa_x = dde.grad.jacobian(y_star, x, i=2, j=0)
    qa_t = dde.grad.jacobian(y_star, x, i=2, j=1)
    qa_xt = dde.grad.hessian(y_star, x, i=0, j=1, component=2)
    qa_tt = dde.grad.hessian(y_star, x, i=1, j=1, component=2)

    qb_x = dde.grad.jacobian(y_star, x, i=3, j=0)
    qb_t = dde.grad.jacobian(y_star, x, i=3, j=1)
    qb_xt = dde.grad.hessian(y_star, x, i=0, j=1, component=3)
    qb_tt = dde.grad.hessian(y_star, x, i=1, j=1, component=3)

    massbalance_liquid_a = (
            ca_t + (1 - e) / e * qa_t + v_scaled * ca_x
    )
    massbalance_liquid_b = (
            cb_t + (1 - e) / e * qb_t + v_scaled * cb_x
    )
    massbalance_solid_a = (
            ka_scaled * (Ha * ca - qa) - qa_t
    )
    massbalance_solid_b = (
            kb_scaled * (Hb * cb - qb) - qb_t
    )
    additional_loss_term_1 = (
            ca_tt + (1 - e) / e * qa_tt + v_scaled * ca_xt
    )
    additional_loss_term_2 = (
            ca_xt + (1 - e) / e * qa_xt + v_scaled * ca_xx
    )
    additional_loss_term_3 = (
            cb_tt + (1 - e) / e * qb_tt + v_scaled * cb_xt
    )
    additional_loss_term_4 = (
            cb_xt + (1 - e) / e * qb_xt + v_scaled * cb_xx
    )
    additional_loss_term_5 = (
            ka_scaled * (Ha * ca_t - qa_t) - qa_tt
    )
    additional_loss_term_6 = (
            ka_scaled * (Ha * ca_x - qa_x) - qa_xt
    )
    additional_loss_term_7 = (
            kb_scaled * (Hb * cb_t - qb_t) - qb_tt
    )
    additional_loss_term_8 = (
            kb_scaled * (Hb * cb_x - qb_x) - qb_xt
    )
    return [massbalance_liquid_a, massbalance_liquid_b, massbalance_solid_a, massbalance_solid_b,
            additional_loss_term_1, additional_loss_term_2, additional_loss_term_3, additional_loss_term_4,
            additional_loss_term_5, additional_loss_term_6, additional_loss_term_7, additional_loss_term_8]


def feed_concentration(x):
    # x, t = np.split(x, 2, axis=1)
    return np.piecewise(x[:, 1:], [x[:, 1:] <= t_inj_scaled, x[:, 1:] > t_inj_scaled], [lambda x: f, lambda x: 0])


def boundary_beg(x, on_boundary):
    return on_boundary and np.isclose(x[0], 0)


def boundary_end(x, on_boundary):
    return on_boundary and np.isclose(x[0], L_scaled)


bc_beg_ca = dde.DirichletBC(
    geomtime, feed_concentration, boundary_beg, component=0
)
bc_beg_cb = dde.DirichletBC(
    geomtime, feed_concentration, boundary_beg, component=1
)

bc_end_ca = dde.NeumannBC(
    geomtime, lambda x: 0, boundary_end, component=0
)
bc_end_cb = dde.NeumannBC(
    geomtime, lambda x: 0, boundary_end, component=1
)

initial_condition_ca = dde.IC(
    geomtime, lambda x: 0, lambda _, on_initial: on_initial, component=0
)
initial_condition_cb = dde.IC(
    geomtime, lambda x: 0, lambda _, on_initial: on_initial, component=1
)
initial_condition_qa = dde.IC(
    geomtime, lambda x: 0, lambda _, on_initial: on_initial, component=2
)
initial_condition_qb = dde.IC(
    geomtime, lambda x: 0, lambda _, on_initial: on_initial, component=3
)


data = dde.data.TimePDE(
    geomtime,
    pde,
    [initial_condition_ca,
     initial_condition_cb,
     bc_beg_ca,
     bc_beg_cb,
     bc_end_ca,
     bc_end_cb,
     observe_y1,
     observe_y2],
    num_domain=1300,
    num_initial=200,
    num_boundary=500,
    anchors=observe_x
)
net = dde.maps.PFNN([2, [20, 20, 20, 20], [20, 20, 20, 20], [20, 20, 20, 20], 4], "tanh", "Glorot uniform")
gPINNmodel = dde.Model(data, net)
gPINNmodel.compile("adam", lr=0.001, external_trainable_variables=[ka_scaled, kb_scaled, Ha, Hb], loss_weights=[60, 20, 1e-3, 1e-3, 1, 1, 1, 1, 1e-2, 1e-2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
variable = dde.callbacks.VariableValue([ka_scaled, kb_scaled, Ha, Hb], period=1000, filename="variables.dat")
losshistory, train_state = gPINNmodel.train(epochs=2000, callbacks=[variable], disregard_previous_best=True)


# plots
"""Get the domain: x = L_scaled and t from 0 to t_scaled"""
X_nn = L_scaled * np.ones((100, 1))
T_nn = np.linspace(0, t_scaled, 100).reshape(100, 1)
X_pred = np.append(X_nn, T_nn, axis=1)

y_pred = gPINNmodel.predict(X_pred)
ca_pred, cb_pred, qa_pred, qb_pred = y_pred[:, 0:1], y_pred[:, 1:2], y_pred[:, 2:3], y_pred[:, 3:]
plt.figure()
plt.plot(T_nn / scale_t, ca_pred, color='blue', linewidth=3., label='Concentration A')
plt.plot(T_nn / scale_t, cb_pred, color='red', linewidth=3., label='Concentration B')
# plt.plot(X_pred, qa_pred)
# plt.plot(X_pred, qb_pred)

plt.legend()
plt.grid(True)
plt.xlabel('t')
plt.ylabel('Concentration')

plt.show()

dde.saveplot(losshistory, train_state, issave=True, isplot=True)

The error:

AttributeError: 'RefVariable' object has no attribute 'tanh'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\10716\OneDrive\桌面\DeepXDE\SMB\Inverse Model_gPINNs.py", line 61, in <module>
    ka_scaled = 5 * np.tanh(ka_scaled_) + 7.5
TypeError: loop of ufunc does not support argument 0 of type RefVariable which has no callable tanh method

Thank you very much for any suggestion.
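
For reference, a minimal sketch of one likely fix, inferred from the traceback rather than confirmed by the repo: dde.Variable creates a TensorFlow variable under the tensorflow.compat.v1 backend, and NumPy ufuncs such as np.tanh cannot operate on it, so the bounded reparameterization has to use backend ops throughout (patch applied to the script above):

from deepxde.backend import tf

# tf.tanh accepts TF variables; np.tanh does not (hence the RefVariable error)
ka_scaled = 5 * tf.tanh(ka_scaled_) + 7.5
kb_scaled = 5 * tf.tanh(kb_scaled_) + 7.5

# Note (assumption about the DeepXDE API): external_trainable_variables expects the
# underlying dde.Variable objects, not the transformed tensors:
# gPINNmodel.compile(..., external_trainable_variables=[ka_scaled_, kb_scaled_, Ha_, Hb_])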

No module named 'tensorflow.python'

Hi Professor @lululxvi,
I was trying to run the code. However, when I ran sbinn_pytorch.py, I got the following error:

Output from spyder call 'get_namespace_view':
Using backend: tensorflow.compat.v1

Traceback (most recent call last):

  File "C:\Users\akash\Desktop\SBINN\sbinn_pytorch.py", line 9, in <module>
    import deepxde as dde

  File "C:\Users\akash\anaconda3\lib\site-packages\deepxde\__init__.py", line 4, in <module>
    from . import backend

  File "C:\Users\akash\anaconda3\lib\site-packages\deepxde\backend\__init__.py", line 85, in <module>
    load_backend(get_preferred_backend())

  File "C:\Users\akash\anaconda3\lib\site-packages\deepxde\backend\__init__.py", line 30, in load_backend
    mod = importlib.import_module(".%s" % mod_name.replace(".", "_"), __name__)

  File "C:\Users\akash\anaconda3\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)

  File "C:\Users\akash\anaconda3\lib\site-packages\deepxde\backend\tensorflow_compat_v1\__init__.py", line 5, in <module>
    from .tensor import *  # pylint: disable=redefined-builtin

  File "C:\Users\akash\anaconda3\lib\site-packages\deepxde\backend\tensorflow_compat_v1\tensor.py", line 4, in <module>
    import tensorflow.compat.v1 as tf

  File "C:\Users\akash\anaconda3\lib\site-packages\tensorflow\__init__.py", line 37, in <module>
    from tensorflow.python.tools import module_util as _module_util

ModuleNotFoundError: No module named 'tensorflow.python' 

This isn't necessarily an issue with SBINN itself, but might be one related to DeepXDE. Can you provide some insight on this and maybe a possible solution to mitigate this issue?

Best,
Akash.
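
A possible workaround sketch, assuming the goal is the pytorch backend that sbinn_pytorch.py targets: the traceback shows DeepXDE auto-selecting tensorflow.compat.v1 and then tripping over a broken TensorFlow install, and DeepXDE's DDE_BACKEND environment variable lets you bypass TensorFlow entirely:

import os

# Must be set before deepxde is imported, so the broken TF install is never loaded
os.environ["DDE_BACKEND"] = "pytorch"

import deepxde as dde  # should now print "Using backend: pytorch"

Alternatively, reinstalling TensorFlow would address the error itself; "No module named 'tensorflow.python'" typically indicates a corrupted or half-upgraded TensorFlow package.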

AttributeError: 'function' object has no attribute 'concat'

Hey Professor @lululxvi,
I was also trying to run the code on Google Colab. In doing so, I encountered the following error with sbinn_pytorch.py (a similar error also occurred with sbinn_tf.py). My console output is as follows:

Compiling model...
Building feed-forward neural network...
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-15-8901f526b953> in <module>()
    192 meal_q = meal_data[1]
    193 
--> 194 sbinn(t[:1800], y[:1800], meal_t, meal_q)
    195 
    196 variable_to_parameter_transform.variable_file(10000, 1000, 1000000, "variables.csv")

6 frames
<ipython-input-15-8901f526b953> in sbinn(data_t, data_y, meal_t, meal_q)
    165     maxepochs = 1000000
    166 
--> 167     model.compile("adam", lr=1e-3, loss_weights=[0, 0, 0, 0, 0, 0, 1e-2])
    168     model.train(epochs=firsttrain, display_every=1000)
    169     model.compile(

/usr/local/lib/python3.7/dist-packages/deepxde/utils/internal.py in wrapper(*args, **kwargs)
     20     def wrapper(*args, **kwargs):
     21         ts = timeit.default_timer()
---> 22         result = f(*args, **kwargs)
     23         te = timeit.default_timer()
     24         print("%r took %f s\n" % (f.__name__, te - ts))

/usr/local/lib/python3.7/dist-packages/deepxde/model.py in compile(self, optimizer, lr, loss, metrics, decay, loss_weights, external_trainable_variables)
    105 
    106         if backend_name == "tensorflow.compat.v1":
--> 107             self._compile_tensorflow_compat_v1(lr, loss_fn, decay, loss_weights)
    108         elif backend_name == "tensorflow":
    109             self._compile_tensorflow(lr, loss_fn, decay, loss_weights)

/usr/local/lib/python3.7/dist-packages/deepxde/model.py in _compile_tensorflow_compat_v1(self, lr, loss_fn, decay, loss_weights)
    119         """tensorflow.compat.v1"""
    120         if not self.net.built:
--> 121             self.net.build()
    122         if self.sess is None:
    123             self.sess = tf.Session()

/usr/local/lib/python3.7/dist-packages/deepxde/utils/internal.py in wrapper(*args, **kwargs)
     20     def wrapper(*args, **kwargs):
     21         ts = timeit.default_timer()
---> 22         result = f(*args, **kwargs)
     23         te = timeit.default_timer()
     24         print("%r took %f s\n" % (f.__name__, te - ts))

/usr/local/lib/python3.7/dist-packages/deepxde/nn/tensorflow_compat_v1/fnn.py in build(self)
     55         y = self.x
     56         if self._input_transform is not None:
---> 57             y = self._input_transform(y)
     58         for i in range(len(self.layer_size) - 2):
     59             if self.batch_normalization is None and self.layer_normalization is None:

<ipython-input-15-8901f526b953> in feature_transform(t)
    134         t = 0.01 * t
    135 
--> 136         return torch.concat((
    137                 t,
    138                 torch.sin(t),

AttributeError: 'function' object has no attribute 'concat' 

Is there something I am doing wrong? I will be extremely grateful for your help.

Best,
Akash.
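
For reference, a hedged sketch of the two things the traceback suggests: the network is being built by the tensorflow.compat.v1 backend (visible in the compile path) while feature_transform calls torch functions, so the backend should be forced to pytorch as in the sketch in the previous issue; separately, older torch builds such as Colab's lack the torch.concat alias, for which torch.cat is the long-standing name. The feature set below is abbreviated, not the repo's exact one:

import torch

def feature_transform(t):
    t = 0.01 * t
    # torch.cat predates the torch.concat alias and works on all torch versions
    return torch.cat((t, torch.sin(t), torch.sin(2 * t)), dim=1)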

Result doesn't seem right.

I have trained the model on the pytorch backend for 1,000,000 epochs, and plotted 'glucose.dat' and 'test.dat' together.
Other than the observed 'G', the results for the other variables don't look right.

Have I made a mistake in scaling the values, or is this a normal result?

[image: loss]

[image: compare]

Use get_variable to limit the parameter search range

Dear Dr. Lu,
I noticed you have done some scaling in the definition of the parameters.

[image]

However, in the definition of the dynamic equations you didn't rescale the parameters. Shouldn't these two equations be multiplied by 100, e.g., Ub = 100 * get_variable(72 / 100, Ub_)? Won't it affect the results if you don't scale?

get_variable function

def get_variable(v, var):
    # Bound the parameter to the interval (0.2 * v, 1.8 * v)
    low, up = v * 0.2, v * 1.8
    l = (up - low) / 2  # half-width; also recenters the interval on v
    v1 = l * torch.tanh(var) + l + low
    return v1
Could you please explain the idea behind the preprocessing of the variables?
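
Reading the code directly (a derivation, not an authoritative statement of the authors' intent): since tanh maps the real line into (-1, 1), v1 = l * tanh(var) + l + low always lies in (low, up) = (0.2 * v, 1.8 * v), and the usual initialization var = 0 gives v1 = l + low = v exactly. So each unknown parameter is confined to within ±80% of its nominal value v, and training starts at v itself. A quick check:

import torch

def get_variable(v, var):  # repeated from above for a runnable check
    low, up = v * 0.2, v * 1.8
    l = (up - low) / 2
    return l * torch.tanh(var) + l + low

print(get_variable(0.72, torch.tensor(0.0)))    # tensor(0.7200): the nominal value
print(get_variable(0.72, torch.tensor(100.0)))  # tensor(1.2960): the upper bound 1.8 * v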

How to set up "ODE_weights" and "data_weights"?

Hi, thank you for your contribution to this work.
When I run an inverse ODE problem, I am confused about the setting of the ODE weights. What rules or experience do you use to determine ODE_weights and data_weights?
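
Not an answer from the authors, but for orientation, a sketch of where these weights enter in DeepXDE (placeholder values; the assumed ordering is residual losses first, followed by the IC/BC/observation terms in the order they were passed to the data object, matching the compile call quoted in the Colab traceback above):

# model: the dde.Model built as in the examples above
model.compile(
    "adam",
    lr=1e-3,
    # Placeholder weights: a common heuristic is to choose them so that the
    # weighted loss terms start out at comparable magnitudes.
    loss_weights=[1, 1, 1, 1, 1, 1, 1e-2],  # e.g., six ODE residuals, then one data term
)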
