
fbpinns's Introduction

Hi there 👋

I am a Postdoctoral Fellow at the ETH Zürich AI Center and a member of the ETH Zürich Computational and Applied Mathematics Lab. My research focuses on developing scientific machine learning algorithms that can scale to solve real-world problems. Check out my website / blog and LinkedIn profile for more. I am also an ML Team Lead for NASA's Frontier Development Lab.

I'm open to collaboration on projects involving scientific machine learning / physics-informed machine learning. Check out my FBPINNs repository for our latest work on scalable physics-informed neural networks.

fbpinns's People

Contributors

benmoseley

fbpinns's Issues

Environment Setting

Hi, could you provide an Anaconda .yml file describing your environment? I hit many environment errors when trying to reproduce your work. Thank you!

About the dataset

I am a beginner and would like to know whether this method requires a dataset for training; I didn't find one in the examples. Thanks!

Some confusion about unnormalization

Your code is awesome! 👍
And I have some difficulty understanding parts of it.

   # code from main.full_model_FBPINN
    y = y * c.Y_N[1] + c.Y_N[0]

I think this line performs unnormalization, and I found the definition of c.Y_N in constants.py.

   # code from constants.Constants
   w = 1e-10
   self.Y_N = (0, 1/self.P.w**2)# mu, sd

This seems to multiply the output by a very large constant. I don't understand why this is necessary, or why this value was chosen.
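
To check my understanding, here is a minimal sketch of the mechanics (my reading, not the author's code), assuming Y_N holds (mean, scale) statistics as the # mu, sd comment suggests:

    # minimal sketch of output unnormalization, assuming Y_N = (mu, sd)
    import numpy as np

    w = 1e-10
    Y_N = (0, 1/w**2)            # mu = 0, sd = 1e20, as defined above
    y_raw = np.array([[0.3]])    # raw network output, roughly O(1)
    y = y_raw*Y_N[1] + Y_N[0]    # rescaled to the expected solution magnitude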

Run Error, need help!

I ran the example "1. Defining your own problem - 1D harmonic oscillator" and got the error below:

[INFO] 2024-02-17 20:46:06 - <fbpinns.constants.Constants object at 0x0000025EF2406950>
run: test
domain: <class 'fbpinns.domains.RectangularDomainND'>
domain_init_kwargs: {'xmin': array([0.]), 'xmax': array([1.])}
problem: <class 'main.HarmonicOscillator1D'>
problem_init_kwargs: {'d': 2, 'w0': 80}
decomposition: <class 'fbpinns.decompositions.RectangularDecompositionND'>
decomposition_init_kwargs: {'subdomain_xs': [array([0. , 0.07142857, 0.14285714, 0.21428571, 0.28571429,
0.35714286, 0.42857143, 0.5 , 0.57142857, 0.64285714,
0.71428571, 0.78571429, 0.85714286, 0.92857143, 1. ])], 'subdomain_ws': [array([0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15, 0.15,
0.15, 0.15, 0.15, 0.15])], 'unnorm': (0.0, 1.0)}
network: <class 'fbpinns.networks.FCN'>
network_init_kwargs: {'layer_sizes': [1, 32, 1]}
n_steps: 20000
scheduler: <class 'fbpinns.schedulers.AllActiveSchedulerND'>
scheduler_kwargs: {}
ns: ((200,),)
n_test: (500,)
sampler: grid
optimiser: <function adam at 0x0000025EF2756CB0>
optimiser_kwargs: {'learning_rate': 0.001}
seed: 0
summary_freq: 1000
test_freq: 1000
model_save_freq: 10000
show_figures: True
save_figures: False
clear_output: True
hostname: desktop-lostn3v

[INFO] 2024-02-17 20:46:07 - Total number of subdomains: 15
[INFO] 2024-02-17 20:46:08 - Total number of trainable parameters:
[INFO] 2024-02-17 20:46:08 - network: 1,455
[INFO] 2024-02-17 20:46:09 - Total number of constraints: 2
[INFO] 2024-02-17 20:46:09 - Computing exact solution..
[INFO] 2024-02-17 20:46:09 - Computing done
[INFO] 2024-02-17 20:46:09 - Getting test data inputs..
[INFO] 2024-02-17 20:46:12 - [i: 0/20000] Updating active inputs..
[INFO] 2024-02-17 20:46:12 - [i: 0/20000] Average number of points/dimension in active subdomains: 28.00
[INFO] 2024-02-17 20:46:15 - [i: 0/20000] Updating active inputs done (2.92 s)
[INFO] 2024-02-17 20:46:15 - [i: 0/20000] Compiling update step..
[INFO] 2024-02-17 20:46:15 - x_batch
[INFO] 2024-02-17 20:46:15 - (200, 1), float32, JVPTracer
[INFO] 2024-02-17 20:46:15 - x_take
[INFO] 2024-02-17 20:46:15 - (418, 1), float32, JVPTracer
[INFO] 2024-02-17 20:46:15 - x_batch
[INFO] 2024-02-17 20:46:15 - (1, 1), float32, JVPTracer
[INFO] 2024-02-17 20:46:15 - x_take
[INFO] 2024-02-17 20:46:15 - (2, 1), float32, JVPTracer
[INFO] 2024-02-17 20:46:17 - [i: 0/20000] Compiling done (2.35 s)

ValueError Traceback (most recent call last)
Cell In[7], line 4
1 from fbpinns.trainers import FBPINNTrainer
3 run = FBPINNTrainer(c)
----> 4 all_params = run.train()

File D:\py_test\PINN\FBPINNs-main\fbpinns\trainers.py:660, in FBPINNTrainer.train(self)
657 # report initial model
658 if i == 0:
659 u_test_losses, start1, report_time = \
--> 660 self._report(i, pstep, fstep, u_test_losses, start0, start1, report_time,
661 u_exact, x_batch_test, test_inputs, all_params, all_opt_states, model_fns, problem, decomposition,
662 active, merge_active, active_opt_states, active_params, x_batch,
663 lossval)
665 # take a training step
666 lossval, active_opt_states, active_params = update(active_opt_states,
667 active_params, fixed_params, static_params_dynamic,
668 takess, constraints)# note compiled function only accepts dynamic arguments

File D:\py_test\PINN\FBPINNs-main\fbpinns\trainers.py:715, in FBPINNTrainer.report(self, i, pstep, fstep, u_test_losses, start0, start1, report_time, u_exact, x_batch_test, test_inputs, all_params, all_opt_states, model_fns, problem, decomposition, active, merge_active, active_opt_states, active_params, x_batch, lossval)
713 # take test step
714 if test_:
--> 715 u_test_losses = self.test(
716 x_batch_test, u_exact, u_test_losses, x_batch, test_inputs, i, pstep, fstep, start0, active, all_params, model_fns, problem, decomposition)
718 # save model
719 if model_save_:

File D:\py_test\PINN\FBPINNs-main\fbpinns\trainers.py:736, in FBPINNTrainer.test(self, x_batch_test, u_exact, u_test_losses, x_batch, test_inputs, i, pstep, fstep, start0, active, all_params, model_fns, problem, decomposition)
733 takes, all_ims, cut_all = test_inputs
734 all_params_cut = {"static":cut_all(all_params["static"]),
735 "trainable":cut_all(all_params["trainable"])}
--> 736 u_test, wp_test_, us_test_, ws_test_, us_raw_test_ = FBPINN_model_jit(all_params_cut, x_batch_test, takes, model_fns, verbose=False)
737 if all_params["static"]["problem"]["dims"][1] == 1:# 1D plots require full lines, not just hist stats
739 m, ud, n = all_params["static"]["decomposition"]["m"], all_params["static"]["problem"]["dims"][0], x_batch_test.shape[0]

File D:\py_test\PINN\FBPINNs-main\fbpinns\trainers.py:320, in FBPINN_model_jit(all_params, x_batch, takes, model_fns, verbose)
318 def FBPINN_model_jit(all_params, x_batch, takes, model_fns, verbose=True):
319 all_params_dynamic, all_params_static = partition(all_params)
--> 320 return _FBPINN_model_jit(all_params_dynamic, all_params_static, x_batch, takes, model_fns, verbose)

[... skipping hidden 12 frame]

File D:\py_test\PINN\FBPINNs-main\fbpinns\trainers.py:317, in _FBPINN_model_jit(all_params_dynamic, all_params_static, x_batch, takes, model_fns, verbose)
314 @partial(jax.jit, static_argnums=(1,4,5))
315 def _FBPINN_model_jit(all_params_dynamic, all_params_static, x_batch, takes, model_fns, verbose):
316 all_params = combine(all_params_dynamic, all_params_static)
--> 317 return FBPINN_model(all_params, x_batch, takes, model_fns, verbose)

File D:\py_test\PINN\FBPINNs-main\fbpinns\trainers.py:164, in FBPINN_model(all_params, x_batch, takes, model_fns, verbose)
162 # apply POU and sum
163 u = jnp.concatenate([us, ws], axis=1)# (s, ud+1)
--> 164 u = jax.ops.segment_sum(u, p_take, indices_are_sorted=False, num_segments=len(np_take))# (_, ud+1)
165 wp = u[:,-1:]
166 u = u[:,:-1]/wp

File D:\ProgramData\anaconda3\envs\py310nn\lib\site-packages\jax\_src\ops\scatter.py:251, in segment_sum(data, segment_ids, num_segments, indices_are_sorted, unique_indices, bucket_size, mode)
201 def segment_sum(data: ArrayLike,
202 segment_ids: ArrayLike,
203 num_segments: int | None = None,
(...)
206 bucket_size: int | None = None,
207 mode: lax.GatherScatterMode | None = None) -> Array:
208 """Computes the sum within segments of an array.
209
210 Similar to TensorFlow's `segment_sum
(...)
249 Array([1, 5, 4], dtype=int32)
250 """
--> 251 return _segment_update(
252 "segment_sum", data, segment_ids, lax.scatter_add, num_segments,
253 indices_are_sorted, unique_indices, bucket_size, reductions.sum, mode=mode)

File D:\ProgramData\anaconda3\envs\py310nn\lib\site-packages\jax\_src\ops\scatter.py:183, in _segment_update(name, data, segment_ids, scatter_op, num_segments, indices_are_sorted, unique_indices, bucket_size, reducer, mode)
180 if bucket_size is None:
181 out = jnp.full((num_segments,) + data.shape[1:],
182 _get_identity(scatter_op, dtype), dtype=dtype)
--> 183 return _scatter_update(
184 out, segment_ids, data, scatter_op, indices_are_sorted,
185 unique_indices, normalize_indices=False, mode=mode)
187 # Bucketize indices and perform segment_update on each bucket to improve
188 # numerical stability for operations like product and sum.
189 assert reducer is not None

File D:\ProgramData\anaconda3\envs\py310nn\lib\site-packages\jax\_src\ops\scatter.py:80, in _scatter_update(x, idx, y, scatter_op, indices_are_sorted, unique_indices, mode, normalize_indices)
77 # XLA gathers and scatters are very similar in structure; the scatter logic
78 # is more or less a transpose of the gather equivalent.
79 treedef, static_idx, dynamic_idx = jnp._split_index_for_jit(idx, x.shape)
---> 80 return _scatter_impl(x, y, scatter_op, treedef, static_idx, dynamic_idx,
81 indices_are_sorted, unique_indices, mode,
82 normalize_indices)

File D:\ProgramData\anaconda3\envs\py310nn\lib\site-packages\jax\_src\ops\scatter.py:115, in _scatter_impl(x, y, scatter_op, treedef, static_idx, dynamic_idx, indices_are_sorted, unique_indices, mode, normalize_indices)
112 x, y = promote_dtypes(x, y)
114 # Broadcast y to the slice output shape.
--> 115 y = jnp.broadcast_to(y, tuple(indexer.slice_shape))
116 # Collapse any None/jnp.newaxis dimensions.
117 y = jnp.squeeze(y, axis=indexer.newaxis_dims)

File D:\ProgramData\anaconda3\envs\py310nn\lib\site-packages\jax\_src\numpy\lax_numpy.py:1218, in broadcast_to(array, shape)
1214 @util.implements(np.broadcast_to, lax_description="""
1215 The JAX version does not necessarily return a view of the input.
1216 """)
1217 def broadcast_to(array: ArrayLike, shape: DimSize | Shape) -> Array:
-> 1218 return util._broadcast_to(array, shape)

File D:\ProgramData\anaconda3\envs\py310nn\lib\site-packages\jax\_src\numpy\util.py:428, in _broadcast_to(arr, shape)
426 if nlead < 0 or not compatible:
427 msg = "Incompatible shapes for broadcasting: {} and requested shape {}"
--> 428 raise ValueError(msg.format(arr_shape, shape))
429 diff, = np.where(tuple(not core.definitely_equal(arr_d, shape_d)
430 for arr_d, shape_d in safe_zip(arr_shape, shape_tail)))
431 new_dims = tuple(range(nlead)) + tuple(nlead + diff)

ValueError: Incompatible shapes for broadcasting: (1046, 2) and requested shape (1046, 1, 2)
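
For reference, a minimal standalone sketch (not the repo's code) of the shape contract of jax.ops.segment_sum; judging only from the error shapes, one plausible trigger is segment ids arriving with shape (s, 1) instead of (s,):

    # segment_ids must be 1D with shape (s,), matching data's leading axis;
    # a trailing singleton dimension produces exactly this kind of
    # "(s, d) vs (s, 1, d)" broadcast failure
    import jax
    import jax.numpy as jnp

    data = jnp.ones((6, 2))              # (s, ud+1) rows to be summed
    ids = jnp.array([0, 0, 1, 1, 2, 2])  # (s,) one segment id per row
    out = jax.ops.segment_sum(data, ids, num_segments=3)  # ok: shape (3, 2)
    # jax.ops.segment_sum(data, ids[:, None], num_segments=3)  # raises the broadcast error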

running error

Hello, I am trying to use your program to write an FBPINN that solves for a 3D temperature field. Because the problem is fairly complex, I use the soft-constrained loss function form in the problem class, so boundary_condition() is not defined there. However, even when I comment out the FBPINN run and just run the PINN, the error still occurs (a sketch of a compatible signature follows the log below).
I am getting an error while running the file paper_main_3D.py.
The screen log:
RUN: final_FBPINN_TemperatureField3D_16h_2l_30b_r_0.1w_All
P: <problems.TemperatureField3D object at 0x0000020521F32808>
SUBDOMAIN_XS: [array([0. , 0.25, 0.5 , 0.75, 1. ]), array([0. , 0.25, 0.5 , 0.75, 1. ]), array([0. , 0.25, 0.5 , 0.75, 1. ])]
SUBDOMAIN_WS: [array([0.025, 0.025, 0.025, 0.025, 0.025]), array([0.025, 0.025, 0.025, 0.025, 0.025]), array([0.025, 0.025, 0.025, 0.025, 0.025])]
BOUNDARY_N: (0.1,)
Y_N: (0, 1)
ACTIVE_SCHEDULER: <class 'active_schedulers.AllActiveSchedulerND'>
ACTIVE_SCHEDULER_ARGS: ()
DEVICE: 0
MODEL: <class 'models.FCN'>
N_HIDDEN: 16
N_LAYERS: 2
BATCH_SIZE: (30, 30, 30)
RANDOM: True
LRATE: 0.001
N_STEPS: 150000
SEED: 123
BATCH_SIZE_TEST: (100, 100, 10)
PLOT_LIMS: (0.4, True)
SUMMARY_FREQ: 250
TEST_FREQ: 5000
MODEL_SAVE_FREQ: 10000
SHOW_FIGURES: False
SAVE_FIGURES: False
CLEAR_OUTPUT: False
SUMMARY_OUT_DIR: results/summaries/final_FBPINN_TemperatureField3D_16h_2l_30b_r_0.1w_All/
MODEL_OUT_DIR: results/models/final_FBPINN_TemperatureField3D_16h_2l_30b_r_0.1w_All/
HOSTNAME: laptop-8t1qinad

Device: cuda:0
Main thread ID: 7240
Torch seed: 123
0 Active updated:
[[[1 1 1 1]
[1 1 1 1]
[1 1 1 1]
[1 1 1 1]]

[[1 1 1 1]
[1 1 1 1]
[1 1 1 1]
[1 1 1 1]]

[[1 1 1 1]
[1 1 1 1]
[1 1 1 1]
[1 1 1 1]]

[[1 1 1 1]
[1 1 1 1]
[1 1 1 1]
[1 1 1 1]]]
torch.Size([512, 1]) torch.Size([512, 1]) torch.Size([512, 1]) torch.Size([0, 1]) torch.Size([0, 1]) torch.Size([0, 1]) torch.Size([0, 1]) torch.Size([512, 3])
torch.Size([576, 1]) torch.Size([576, 1]) torch.Size([576, 1]) torch.Size([0, 1]) torch.Size([0, 1]) torch.Size([0, 1]) torch.Size([0, 1]) torch.Size([576, 3])
[... similar shape lines repeated for each of the 64 subdomains, with leading sizes 512/576/648/729 ...]
Process Process-1:1:
Traceback (most recent call last):
File "C:\Users\10614\Anaconda3\envs\TF2.1\lib\multiprocessing\process.py", line 297, in _bootstrap
self.run()
File "C:\Users\10614\Anaconda3\envs\TF2.1\lib\multiprocessing\process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\10614\Desktop\FBPINNs-main\fbpinns\trainersBase.py", line 111, in train_models_multiprocess
run.train()
File "C:\Users\10614\Desktop\FBPINNs-main\fbpinns\main.py", line 267, in train
xs, yjs, yjs_sum, loss = self._train_step(models, optimizers, c, D, i)
File "C:\Users\10614\Desktop\FBPINNs-main\fbpinns\main.py", line 185, in _train_step
yj = c.P.boundary_condition(x, *yj, *c.BOUNDARY_N)# problem-specific
TypeError: boundary_condition() missing 1 required keyword-only argument: 'args'
Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Users\10614\Anaconda3\envs\TF2.1\lib\multiprocessing\connection.py", line 302, in _recv_bytes
overlapped=True)
BrokenPipeError: [WinError 109] The pipe has been ended.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\10614\Anaconda3\envs\TF2.1\lib\threading.py", line 926, in _bootstrap_inner
self.run()
File "C:\Users\10614\Anaconda3\envs\TF2.1\lib\site-packages\tensorboardX\event_file_writer.py", line 202, in run
data = self._queue.get(True, queue_wait_duration)
File "C:\Users\10614\Anaconda3\envs\TF2.1\lib\multiprocessing\queues.py", line 108, in get
res = self._recv_bytes()
File "C:\Users\10614\Anaconda3\envs\TF2.1\lib\multiprocessing\connection.py", line 216, in recv_bytes
buf = self._recv_bytes(maxlength)
File "C:\Users\10614\Anaconda3\envs\TF2.1\lib\multiprocessing\connection.py", line 321, in _recv_bytes
raise EOFError
EOFError
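
The TypeError above suggests the problem class declares boundary_condition with a keyword-only argument (anything after a bare * or a *args-style parameter), which the trainer's purely positional call c.P.boundary_condition(x, *yj, *c.BOUNDARY_N) cannot fill. A minimal sketch of a compatible pass-through for a soft-constrained problem (the argument list here is hypothetical and must match the gradients your problem actually requests, not the repo's required API):

    # hypothetical pass-through: accept the positional call from
    # main._train_step and return the network outputs unchanged
    def boundary_condition(self, x, u, dudx, d2udx2, *boundary_n):
        # boundary_n absorbs the c.BOUNDARY_N values; no hard constraint applied
        return u, dudx, d2udx2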

Running Error

Hello,

I am getting an error while running the file paper_main_1D.py. I am using Spyder IDE on Anaconda.

"OSError: [WinError 1455] The paging file is too small for this operation to complete. Error loading "C:\Users\gaura\Anaconda3\envs\torch\lib\site-packages\torch\lib\caffe2_detectron_ops_gpu.dll" or one of its dependencies."

Modifying the wave 3D Problem

[attached image: results of the modified run]
Thank you for the innovative contribution!

I tried modifying the wave 3D problem to have the following boundary conditions:

u(x,y,0) = 0
u(0,0,t) = 2 sin (2 pi t) #time-dependent source

in this way:


 def boundary_condition(self, x, u, dudt, d2udx2, d2udy2, d2udt2, sd):
        
        # Apply u = tanh^2((t-0)/sd)*NN + sigmoid((d-t)/sd)*exp( -(1/2)((x/sd)^2+(y/sd)^2) )  ansatz
        
        t_2, dudt_2, d2udt_22 = boundary_conditions.tanh2_2(x[:,2:3], 0, sd)
        s, _, d2uds2   = boundary_conditions.sigmoid_2(-x[:,2:3], -2*sd, 0.2*sd)# beware (!) this gives correct 2nd order gradients but negative 1st order (sign flip!)
        
        mx = my = 0; 
        sx = sy = self.source_sd
        xnx, xny = (x[:,0:1]-mx)/sx, (x[:,1:2]-my)/sy
        #exp = torch.exp(-0.5*(xnx**2 + xny**2))
        exp = torch.exp(-0.5*(xnx**2 + xny**2))*0 #IC = 0 instead of exp
        #Initial GP
        f = exp
        d2udfx2 = (1/sx**2) * ((xnx**2) - 1)*exp
        d2udfy2 = (1/sy**2) * ((xny**2) - 1)*exp
        
        u_new   = t_2*u + s*f
        d2udx2_new = t_2*d2udx2 + s*d2udfx2
        d2udy2_new = t_2*d2udy2 + s*d2udfy2
        d2udt2_new = d2udt_22*u + 2*dudt_2*dudt + t_2*d2udt2 + d2uds2*f

        #Zero Ic and BC
#         u_new   = t_2*u *0
#         d2udx2_new = t_2*d2udx2 
#         d2udy2_new = t_2*d2udy2 
#         d2udt2_new = d2udt_22*u 
        
        return u_new, dudt, d2udx2_new, d2udy2_new, d2udt2_new# skip updating first order gradients (not needed for loss)
    

I also made some changes to the finite-difference file, as follows:


import numpy as np
import time
from seismic_CPML_helper import get_dampening_profiles

# todo: is this faster in parallel with np.roll?

def seismicCPML2D_wS(NX,
                NY,
                NSTEPS,
                DELTAX,
                DELTAY,
                DELTAT,
                NPOINTS_PML,
                velocity,
                density,
                initial_pressures,
                f0=20.,
                dtype=np.float32,
                output_wavefields=True,
                gather_is=None):
    
    "Run seismicCPML2D"
    
    ## INPUT PARAMETERS
    velocity = velocity.astype(dtype)
    density = density.astype(dtype)
    
    output_gather = gather_is is not None
    
    K_MAX_PML = 1.
    ALPHA_MAX_PML = 2.*np.pi*(f0/2.)# from Festa and Vilotte
    NPOWER = 2.# power to compute d0 profile
    Rcoef = 0.001
    
    STABILITY_THRESHOLD = 1e25
    ##
    
    
    # STABILITY CHECKS
    
    # basically: delta x > np.sqrt(2) * max(v) * delta t (2D case)
    courant_number = np.max(velocity) * DELTAT * np.sqrt(1/(DELTAX**2) + 1/(DELTAY**2))
    if courant_number > 1.: raise Exception("ERROR: time step is too large, simulation will be unstable %.2f"%(courant_number))
    if NPOWER < 1: raise Exception("ERROR: NPOWER must be at least 1")
    
    
    # GET DAMPENING PROFILES
    
    [[a_x, a_x_half, b_x, b_x_half, K_x, K_x_half],
     [a_y, a_y_half, b_y, b_y_half, K_y, K_y_half]] = get_dampening_profiles(velocity, NPOINTS_PML, Rcoef, K_MAX_PML, ALPHA_MAX_PML, NPOWER, DELTAT, DELTAS=(DELTAX, DELTAY), dtype=dtype, qc=False)
    

    # INITIALISE ARRAYS
    
    kappa = density*(velocity**2)
    
    # pressure_present = initial_pressures[1].astype(dtype)
    # pressure_past = initial_pressures[0].astype(dtype)

    #zero IC
    pressure_present = np.zeros((NX, NY), dtype=dtype)
    pressure_past = np.zeros((NX, NY), dtype=dtype)
    
    
    memory_dpressure_dx = np.zeros((NX, NY), dtype=dtype)
    memory_dpressure_dy = np.zeros((NX, NY), dtype=dtype)
    
    memory_dpressurexx_dx = np.zeros((NX, NY), dtype=dtype)
    memory_dpressureyy_dy = np.zeros((NX, NY), dtype=dtype)
    
    if output_wavefields: wavefields = np.zeros((NSTEPS, NX, NY), dtype=dtype)
    if output_gather: gather = np.zeros((gather_is.shape[0], NSTEPS), dtype=dtype)
    
    # precompute density_half arrays
    density_half_x = np.pad(0.5 * (density[1:NX,:]+density[:NX-1,:]), [[0,1],[0,0]], mode="edge")
    density_half_y = np.pad(0.5 * (density[:,1:NY]+density[:,:NY-1]), [[0,0],[0,1]], mode="edge")
    
    
    # RUN SIMULATION
    
    start = time.time()
    for it in range(NSTEPS):
                
        # compute the first spatial derivatives divided by density
        
        value_dpressure_dx = np.pad((pressure_present[1:NX,:]-pressure_present[:NX-1,:]) / DELTAX, [[0,1],[0,0]], mode="constant", constant_values=0.)
        value_dpressure_dy = np.pad((pressure_present[:,1:NY]-pressure_present[:,:NY-1]) / DELTAY, [[0,0],[0,1]], mode="constant", constant_values=0.)
    
        memory_dpressure_dx = b_x_half * memory_dpressure_dx + a_x_half * value_dpressure_dx
        memory_dpressure_dy = b_y_half * memory_dpressure_dy + a_y_half * value_dpressure_dy
    
        value_dpressure_dx = value_dpressure_dx / K_x_half + memory_dpressure_dx
        value_dpressure_dy = value_dpressure_dy / K_y_half + memory_dpressure_dy
    
        pressure_xx = value_dpressure_dx / density_half_x
        pressure_yy = value_dpressure_dy / density_half_y
        
        # compute the second spatial derivatives
        
        value_dpressurexx_dx = np.pad((pressure_xx[1:NX,:]-pressure_xx[:NX-1,:]) / DELTAX, [[1,0],[0,0]], mode="constant", constant_values=0.)
        value_dpressureyy_dy = np.pad((pressure_yy[:,1:NY]-pressure_yy[:,:NY-1]) / DELTAY, [[0,0],[1,0]], mode="constant", constant_values=0.)
    
        memory_dpressurexx_dx = b_x * memory_dpressurexx_dx + a_x * value_dpressurexx_dx
        memory_dpressureyy_dy = b_y * memory_dpressureyy_dy + a_y * value_dpressureyy_dy
        
        value_dpressurexx_dx = value_dpressurexx_dx / K_x + memory_dpressurexx_dx
        value_dpressureyy_dy = value_dpressureyy_dy / K_y + memory_dpressureyy_dy
        
        dpressurexx_dx = value_dpressurexx_dx
        dpressureyy_dy = value_dpressureyy_dy
        
        # apply the time evolution scheme
        # we apply it everywhere, including at some points on the edges of the domain that have not been calculated above,
        # which is of course wrong (or more precisely undefined), but this does not matter because these values
        # will be erased by the Dirichlet conditions set on these edges below
        # pressure_future =   - pressure_past \
        #                     + 2 * pressure_present \
        #                     + DELTAT*DELTAT*(dpressurexx_dx+dpressureyy_dy)*kappa

        # Stepping with a source function, p is passed from the main file as p0 (Gaussian pulse)
        # location of source is passed within p0
        def func_t(p,t_inst):
            Amp = 1
            freq = 1
            t = t_inst*DELTAT
            return p*Amp*np.sin(2*np.pi*freq*t)
            
        pressure_future =   - pressure_past \
                            + 2 * pressure_present \
                            + DELTAT*DELTAT*(dpressurexx_dx+dpressureyy_dy)*kappa \
                            + DELTAT*DELTAT*func_t(initial_pressures[1].astype(dtype),it)
                
        
        # apply Dirichlet conditions at the bottom of the C-PML layers,
        # which is the right condition to implement in order for C-PML to remain stable at long times
        
        # Dirichlet conditions
        pressure_future[0,:] = pressure_future[-1,:] = 0.
        pressure_future[:,0] = pressure_future[:,-1] = 0.
        
        if output_wavefields: wavefields[it,:,:] = np.copy(pressure_present)
        if output_gather:
            gather[:,it] = np.copy(pressure_present[gather_is[:,0], gather_is[:,1]])# nb important to copy

        
        # check stability of the code, exit if unstable
        if(np.max(np.abs(pressure_present)) > STABILITY_THRESHOLD):
            raise Exception('code became unstable and blew up')
    
        # move new values to old values (the present becomes the past, the future becomes the present)
        pressure_past = pressure_present
        pressure_present = pressure_future
    
        #print(pressure_past.dtype, pressure_future.dtype, wavefields.dtype, gather.dtype)
        if it % 10000 == 0 and it!=0:
            rate = (time.time()-start)/10.
            print("[%i/%i] %.2f s per step"%(it, NSTEPS, rate))
            start = time.time()
    
    output = [None, None]
    if output_wavefields: output[0]=wavefields
    if output_gather: output[1]=gather
    return output

Mainly, I am attempting to change the IC and add a time-dependent source term so the equation becomes:


 d^2 u     d^2 u        1   d^2 u
 ------ + ------  -  ---   ------   =   S(x,y,t)
 dx^2       dy^2      c^2  dt^2
        

where,

Amp = 1
freq = 1
sx = sy = self.source_sd
mx = 0; my = 0
GP = torch.exp(-0.5*(( (x[:,0:1]-mx)/sx)**2 + ((x[:,1:2]-my)/sy)**2 ))

S = Amp * GP * torch.sin(2*np.pi*freq*x[:,2:3])  #The Source function

but the results are as shown in the attached image. So, my questions are:

  1. How can I implement my specified boundary and initial conditions better than the way I tried (if my attempt was correct at all)? I don't fully understand how to use the implemented boundary-condition helper functions to implement my specific equations.

 d^2 u     d^2 u        1   d^2 u
 ------ + ------  -  ---   ------   =   S(x,y,t)
 dx^2       dy^2      c^2  dt^2
        
Boundary conditions:
        u(x,y,0) = 0
        u(0,0,t) = 2 * sin (2 * pi * t)	#time-dependent source
        

The results in the image were produced with these batch sizes:

batch_size = (30,30,30)
batch_size_test = (40,40,15)

because of the limited memory on my GPU.

  2. Does this affect the results? If so, how can I increase batch_size_test without getting an OOM error?

Thanks again! Looking forward to your reply.
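
On question 2, one generic way to bound test-time memory, independent of this repo's trainer, is to evaluate the test grid in slices; a minimal sketch (eval_in_chunks is a hypothetical helper, not a repo function):

    import torch

    def eval_in_chunks(model, x, chunk=20000):
        "Evaluate model over x in slices to avoid test-time OOM"
        outs = []
        with torch.no_grad():
            for i in range(0, x.shape[0], chunk):
                outs.append(model(x[i:i+chunk]))
        return torch.cat(outs, dim=0)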

Wavefield "c" Value

Dear Ben,

Thank you for this creative work! I saw in the code that you can use two types of c values; one is obtained by _constant_c() and the other by _gaussian_c(). I understand the conceptual difference between them, but I wonder, if I wanted to test a wavefield with two values of c (e.g. c1 and c2), how could I write a function to do that?

My initial attempt was to write:

def _variable_c(x):
        "Defines a variable velocity model"
        return ((np.tanh(80*(x-2.5))+5)/4) * torch.ones((x.shape[0],1), dtype=x.dtype, device= x.device) 

I then use this assignment:

 c = _variable_c 
 c = c(torch.from_numpy(np.stack([xx,yy],-1).reshape((NX*NY,2)))).numpy().reshape(NX,NY)

The idea I'm trying to achieve is to assign c=1 for x<2.5 and c=1.5 for x>2.5, using the tanh function for a smooth transition of the c value.
I'm getting this error:

~\AppData\Local\Temp/ipykernel_16716/2797370787.py in exact_solution(x, batch_size)
     51 
     52         c = _variable_c #_constant_c
---> 53         c = c(torch.from_numpy(np.stack([xx,yy],-1).reshape((NX*NY,2)))).numpy().reshape(NX,NY)
     54         #c(c0,torch.from_numpy(np.stack([xx,yy],-1).reshape((NX*NY,2)))).numpy().reshape(NX,NY)
     55 

ValueError: cannot reshape array of size 2000000 into shape (1000,1000)

Any ideas on how to achieve this and get around the error?
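
For what it's worth, the reshape failure is consistent with tanh being applied to both columns of the (N, 2) input (2,000,000 = 1000 x 1000 x 2), so the result has shape (N, 2) rather than (N, 1). A minimal sketch that slices out only the x-coordinate (my guess at the intent, not a confirmed fix):

    import torch

    def _variable_c(x):
        "Smooth velocity model: c -> 1 for x < 2.5, c -> 1.5 for x > 2.5"
        # x has shape (N, 2); use only the x-coordinate so the output is (N, 1)
        return (torch.tanh(80.0*(x[:, 0:1] - 2.5)) + 5.0) / 4.0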

Empty gradient when the dim of u is 2

I am trying to model the 1D Maxwell equations: dH/dt - dE/dx = 0; dE/dt - dH/dx = source. The problem dims are set to (2,2). I can implement these equations in PyTorch, but with JAX I found that the gradients of E (dE/dt and dE/dx) are both empty ([]), which results in NaN in the loss. Please help me identify the issue. Thanks @benmoseley

import jax
import jax.numpy as jnp
import numpy as np

from fbpinns.domains import RectangularDomainND
from fbpinns.problems import Problem
from fbpinns.decompositions import RectangularDecompositionND
from fbpinns.networks import FCN
from fbpinns.constants import Constants, get_subdomain_ws
from fbpinns.trainers import FBPINNTrainer, PINNTrainer

class FDTD2D(Problem):
    """Solves the time-dependent (1+1)D Maxwell equation with constant velocity
        u = [H, E]
        d H     dE
        ---- - ----  =  0
        dt       dx

        d E     dH
        ---- - ----  =  0
        dt      dx

        Boundary conditions:
        E(x,0) = exp( -(1/2)((x/sd)^2) )
        du
        --(x,0) = 0
        dt
    """

    @staticmethod
    def init_params(c=1, sd=1):

        static_params = {
            "dims":(2,2),
            "c":c,
            "sd":sd,
            }
        return static_params, {}

    @staticmethod
    def sample_constraints(all_params, domain, key, sampler, batch_shapes):

        # physics loss
        x_batch_phys = domain.sample_interior(all_params, key, sampler, batch_shapes[0])
        required_ujs_phys = (
            (0,(0,)),#dH / dx
            (1,(0,)),#dE / dx
            (0,(1,)),#dH / dt
            (1,(1,)),#dE /dt
        )
        return [[x_batch_phys, required_ujs_phys],]

    @staticmethod
    def constraining_fn(all_params, x_batch, u):
        c = all_params["static"]["problem"]["c"]
        sd = all_params["static"]["problem"]["sd"]
        t = x_batch[:,1:2]

        u = (jax.nn.tanh(c*t/(2*sd))**2)*u# constrains u(x,y,0) = u_t(x,y,0) = 0
        return u

    @staticmethod
    def loss_fn(all_params, constraints):
        c = all_params["static"]["problem"]["c"]
        sd = all_params["static"]["problem"]["sd"]
        x_batch, dHdx, dEdx, dHdt, dEdt = constraints[0]
#        jax.debug.print("ret {}", dEdx) # dEdx and dEdt being [] while dHdx and dHdt are OK

        x, t = x_batch[:,0:1], x_batch[:,1:2]

        e = -0.5*(x**2 + t**2)/(sd**2)
        s = 2e3*(1+e)*jnp.exp(e)# ricker source term

        phys1 = jnp.mean((dHdx - dEdt - s)**2)
        phys2 = jnp.mean((dEdx - dHdt)**2)
        phys = phys1 + phys2
        return phys

    @staticmethod
    def exact_solution(all_params, x_batch, batch_shape):

        key = jax.random.PRNGKey(0)
        return jax.random.normal(key, (x_batch.shape[0],1))


subdomain_xs = [np.linspace(-1,1,5), np.linspace(0,1,5)]
subdomain_ws = get_subdomain_ws(subdomain_xs, 1.9)

c = Constants(
    run="test",
    domain=RectangularDomainND,
    domain_init_kwargs=dict(
        xmin=np.array([-1,0]),
        xmax=np.array([1,1]),
    ),
    problem=FDTD2D,
    problem_init_kwargs=dict(
        c=1, sd=0.1,
    ),
    decomposition=RectangularDecompositionND,
    decomposition_init_kwargs=dict(
        subdomain_xs=subdomain_xs,
        subdomain_ws=subdomain_ws,
        unnorm=(0.,1.),
    ),
    network=FCN,
    network_init_kwargs=dict(
        layer_sizes=[2,32,2],
    ),
    ns=((100,50),),
    n_test=(100,5),
    n_steps=5000,
    optimiser_kwargs=dict(learning_rate=1e-3),
    summary_freq=200,
    test_freq=200,
    show_figures=True,
    clear_output=True,
)

#run = FBPINNTrainer(c)
#run.train()

c["network_init_kwargs"] = dict(layer_sizes=[2,128,128,2])
run = PINNTrainer(c)
run.train()
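
As a standalone sanity check, independent of the FBPINNs trainer, per-component derivatives of a two-output function can be inspected directly with jax.jacfwd; this toy stand-in (u_fn is hypothetical) shows what nonzero dE/dx and dE/dt should look like:

    import jax
    import jax.numpy as jnp

    def u_fn(xt):
        "Toy stand-in for the network, returning [H, E] at one (x, t) point"
        x, t = xt[0], xt[1]
        return jnp.stack([jnp.sin(x + t), jnp.cos(x - t)])

    J = jax.jacfwd(u_fn)(jnp.array([0.3, 0.7]))  # J[i, j] = d u_i / d x_j
    dEdx, dEdt = J[1, 0], J[1, 1]                # row 1 = the E component
    print(dEdx, dEdt)                            # both nonzero for this toy case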
