
l96_demo's Introduction

Lorenz 1996 Two Time-Scale Model

build-and-deploy-book Binder

This repository provides a set of notebooks to pedagogically introduce the reader to the problem of parameterization in the climate sciences and how machine learning may be used to address it.

The original goal for these notebooks in this Jupyter book was for our M2LInES team to work together and learn from each other; in particular, to get up to speed on the key scientific aspects of our collaboration (parameterizations, machine learning, data assimilation, uncertainty quantification) and to develop new ideas. Now this material is presented here for anyone to learn from. The primary audience for this guide is researchers and students trained in climate science who want an introduction to machine learning, or trained in machine learning who want to get acquainted with the parameterization problem in climate sciences. Since the book addresses people from multiple fields, the prerequisites are minimal: a basic understanding of Python and some experience with PDEs or dynamical systems and solving them numerically (an introductory course in numerical methods) are helpful. This book can be used as a teaching tool, for self-study, or as a reference manual.

The easiest way to read the content (non-interactively) is to view it through the book's website. For a more interactive experience, either use the Binder link provided above, or set up the appropriate environments on your own machine and interact with each notebook individually.

Structure and Organization of the Repo

This project uses Jupyter Book to organize a collection of Jupyter Notebooks into a website.

  • The notebooks all live in the notebooks directory. Note that the notebooks are stored in "stripped" form, without any outputs of execution saved. (They are executed as part of the build process.)
  • The table of contents is located in _toc.yml.
  • The book configuration is in _config.yml.
  • The references are in _references.bib.

Setting up the Environment

Installing Julia

The equation discovery notebooks require Julia to be installed on your machine. Depending on your platform, download and install the appropriate Julia binary from here.

Note: The PyCall package does not work on Apple Silicon MacBooks, which causes build errors for the equation discovery notebooks. To build the entire project on an Apple Silicon MacBook, either omit the equation discovery notebooks from the _toc.yml file, or build the project on a Linux machine instead.

Installing Python Dependencies

The Python packages required to run and build the notebooks are listed in the environment.yaml and requirements.txt files. To install all these dependencies in a virtual environment, run

$ conda env create -f environment.yaml
$ conda activate L96M2lines
$ python -m pip install -r requirements.txt
$ python -c "import pysr; pysr.install()"

To speed up the continuous integration, we also generated a conda lock file for Linux as follows.

$ conda-lock lock --mamba -f environment.yaml -p linux-64 --kind explicit

This file lives in conda-linux-64.lock and should be regenerated periodically.

Building the Book

Most readers interested in learning from this material can just run individual notebooks once they have set up the appropriate environment, or use the Binder link provided at the top of this README. However, more advanced readers, particularly those wishing to contribute back, may be interested in building the book locally for testing purposes.

To build the book locally, you should first create and activate your environment, as described above. Then run

$ jupyter book build .

When you run this command, the notebooks will be executed. The built HTML will be placed in `_build/html`. To preview the book, run

$ cd _build/html
$ python -m http.server

The build process can take a long time, so we have configured the setup to use jupyter-cache. If you re-run the build command, it will only re-execute notebooks that have changed. The cache files live in _build/.jupyter_cache.

To check the status of the cache, run

$ jcache cache list -p _build/.jupyter_cache

To remove cached notebooks, run

$ jcache cache remove -p _build/.jupyter_cache

Contributing and reporting problems

If you find any problems or mistakes in the material, think something is not clear, or spot errors in the codes, please open a new issue to report these or seek help.

Also, we welcome any contributions that you would like to make. These can come in the form of:

  • Providing solutions to any errors or clarity issues you or others may have spotted and reported on the issues page.
  • Suggesting and creating new notebooks for any additional concepts that are not currently covered.

These contributions can also be made by opening a new issue and starting a discussion about what you would like to contribute, and eventually submitting changes in the form of a new pull request.

Pre-commit

We use pre-commit to keep the notebooks clean. To use pre-commit, run the following command in the repo's top-level directory:

$ pre-commit install

At this point, pre-commit will automatically be run every time you make a commit.

Pull Requests and Feature Branches

In order to contribute a PR, you should start from a new feature branch.

$ git checkout -b my_new_feature

(Replace my_new_feature with a descriptive name of the feature you're working on.)

Make your changes and then make a new commit:

$ git add changed_file_1.ipynb changed_file_2.ipynb
$ git commit -m "message about my new feature"

You can also automatically stage and commit changes to existing files with:

$ git commit -am "message about my new feature"

Then push your changes to your remote on GitHub (usually called origin):

$ git push origin my_new_feature

Then navigate to https://github.com/m2lines/L96_demo to open your pull request.

Synchronizing from upstream

To synchronize your local branch with upstream changes, first make sure you have the upstream remote configured. To check your remotes, run

$ git remote -v
origin	git@github.com:rabernat/L96_demo.git (fetch)
origin	git@github.com:rabernat/L96_demo.git (push)
upstream	git@github.com:m2lines/L96_demo.git (fetch)
upstream	git@github.com:m2lines/L96_demo.git (push)

If you don't have the upstream remote, add it as follows:

$ git remote add upstream git@github.com:m2lines/L96_demo.git

Then, make sure you are on the main branch locally:

$ git checkout main

And then run

$ git fetch upstream
$ git merge upstream/main

Ideally, you will not have any merge conflicts. You are now ready to make a new feature branch.

References

Arnold, H. M., I. M. Moroz, and T. N. Palmer. “Stochastic Parametrizations and Model Uncertainty in the Lorenz ’96 System.” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 371, no. 1991 (May 28, 2013): 20110479. https://doi.org/10.1098/rsta.2011.0479.

Brajard, Julien, Alberto Carrassi, Marc Bocquet, and Laurent Bertino. “Combining Data Assimilation and Machine Learning to Infer Unresolved Scale Parametrization.” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 379, no. 2194 (April 5, 2021): 20200086. https://doi.org/10.1098/rsta.2020.0086.

Schneider, Tapio, Shiwei Lan, Andrew Stuart, and João Teixeira. “Earth System Modeling 2.0: A Blueprint for Models That Learn From Observations and Targeted High-Resolution Simulations.” Geophysical Research Letters 44, no. 24 (December 28, 2017): 12,396-12,417. https://doi.org/10.1002/2017GL076101.

Wilks, Daniel S. “Effects of Stochastic Parametrizations in the Lorenz ’96 System.” Quarterly Journal of the Royal Meteorological Society 131, no. 606 (2005): 389–407. https://doi.org/10.1256/qj.04.03.


l96_demo's Issues

Use one single model module

The original L96_model.py file has been copied into every different section. This is a potential source of errors and inconsistencies. Here is a list of all of the modules in the repo:

./01Intro/L96_model.py
./02type-of-parametrization/L96_model.py
./03Data-Assimilation/DA_methods.py
./03Data-Assimilation/L96_model.py
./04Subgrid-parametrization-pytorch/L96_model_XYtend.py
./05Offline-DA-increments/DA_methods.py
./05Offline-DA-increments/L96_model.py
./06Different-ML-models/L96_model.py
./06Different-ML-models/L96_model_XYtend.py
./07-Interpretability-of-ML/L96_model_XYtend.py
./07-Interpretability-of-ML/LRP_gradient.py
./08-Implementation/L96_model_XYtend.py

We should try to consolidate around a single file.

Steps:

  • Understand the differences between the different L96_model modules in the different sections
  • Create a single "best" version
  • Move everything into a single directory (notebooks and modules)
  • Refactor all the notebooks to use the same model file

Furthermore, we should try to eliminate the use of modules for anything other than L96 itself. This means L96_model_XYtend.py, LRP_gradient.py, DA_methods.py: all of this should be in notebooks, not modules.
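
One way the consolidated layout could work (a sketch; the `notebooks/` location and the commented import are illustrative, not the final decision): each notebook prepends the shared directory to `sys.path` before importing, so all of them resolve the same file.

```python
import sys
from pathlib import Path

# Sketch: assuming the single L96_model.py ends up under notebooks/,
# every notebook could resolve it the same way instead of keeping a copy.
shared_dir = Path.cwd() / "notebooks"
if str(shared_dir) not in sys.path:
    sys.path.insert(0, str(shared_dir))

# from L96_model import L96_eq1_xdot  # would now import the shared module
print(sys.path[0])
```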

Gradient descent video does not render consistently

On @yaniyuval's laptop the video looks right, but @adcroft and I see something like this:

download.mp4

The relevant code is

w = torch.as_tensor([-2.0, -3])
w = nn.Parameter(w)
lr = 0.001
fig, ax = plt.subplots(figsize=(10, 6))
ax.scatter(x[:, 0], y)
(line,) = ax.plot(
    x[:, 0],
    lin(w.detach().numpy()[0], w.detach().numpy()[1], x.detach().numpy()[:, 0]),
    c="firebrick",
)
# line, = ax.plot(x[:,0], y, c='firebrick')
ax.set_title("Loss = 0.00")
plt.close()


def animate(i):
    for t in range(100):
        l = update2(t)
    ax.set_title("Loss = %.2f" % l)
    line.set_data(
        x.detach().numpy()[:, 0],
        lin(w.detach().numpy()[0], w.detach().numpy()[1], x.detach().numpy()[:, 0]),
    )
    return (line,)

anim = FuncAnimation(fig, animate, frames=70, interval=150, blit=True);

# You might have some difficulties running this cell without importing certain packages.
# might need to install: conda install -c conda-forge ffmpeg
HTML(anim.to_html5_video())

Closing page

After discussion with @dhruvbalwada, one idea is to close the notebook by adding some comments and references from M2LinES' work.

Team meetings topics suggestions

Please suggest topics for team meetings below. If you see an already suggested topic that interests you, give it a thumbs up.

Jcache check status error

Error

On following the README to check status of the cache

jcache cache list -p _build/.jupyter_cache

Did NOT run the conda-lock command as it is designed for Linux

Output

(L96M2lines) shubham@Shubhams-MBP L96_demo % jcache cache list -p _build/.jupyter_cache
Cache path: _build/.jupyter_cache
The cache does not yet exist, do you want to create it? [y/N]: y
No Cached Notebooks
Traceback (most recent call last):
  File "/Users/shubham/opt/anaconda3/envs/L96M2lines/bin/jcache", line 10, in <module>
    sys.exit(jcache())
  File "/Users/shubham/opt/anaconda3/envs/L96M2lines/lib/python3.9/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/Users/shubham/opt/anaconda3/envs/L96M2lines/lib/python3.9/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/Users/shubham/opt/anaconda3/envs/L96M2lines/lib/python3.9/site-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Users/shubham/opt/anaconda3/envs/L96M2lines/lib/python3.9/site-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Users/shubham/opt/anaconda3/envs/L96M2lines/lib/python3.9/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/shubham/opt/anaconda3/envs/L96M2lines/lib/python3.9/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/Users/shubham/opt/anaconda3/envs/L96M2lines/lib/python3.9/site-packages/jupyter_cache/cli/commands/cmd_cache.py", line 44, in list_caches
    tabulate_cache_records(records, hashkeys=hashkeys, path_length=path_length)
  File "/Users/shubham/opt/anaconda3/envs/L96M2lines/lib/python3.9/site-packages/jupyter_cache/utils.py", line 83, in tabulate_cache_records
    import tabulate
ModuleNotFoundError: No module named 'tabulate'

Device: Mac OS (M1)

Fix

conda install tabulate

Output:

(L96M2lines) shubham@Shubhams-MBP L96_demo % jcache cache list -p _build/.jupyter_cache
No Cached Notebooks

(L96M2lines) shubham@Shubhams-MBP L96_demo %

Comment

update environment to include tabulate

Split Neural network for L96

The neural network for L96 notebook is too long and should be split up. I see 3 discrete parts:

  • This notebook should focus on the simplest problem, which is section 1 (using a NN for L96 parameterization).
  • There should be a separate notebook for non-local problems/deeper networks.
  • The remaining stuff (regularization, overfitting, choosing LR etc) could be put in a notebook called "advanced topics in NN training" or something similar.

Duplicate notebooks?

Is there a reason for the existence of gcm-analogue-copy.ipynb and gcm-parameterization-problem-copy.ipynb? These two notebooks just seem to be copies of older versions of gcm-analogue.ipynb and gcm-parameterization-problem.ipynb. Can we delete these duplicate notebooks or are they used anywhere? The table of contents suggests that we don't use them. But they take up time when building the jupyter book.

Create a notebook about L96 itself and its implementation

The L96 model currently lives in a standalone module (see #22). But the book (#15) does not have any explanation or text about this.

We need a notebook that simply describes the L96 system and shows how it's coded. This can probably be adapted from the slides.
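
As a starting point, such a notebook could walk through a minimal implementation of the single-time-scale system, dX_k/dt = (X_{k+1} - X_{k-2}) X_{k-1} - X_k + F. A pure-Python sketch (illustrative names, not the repo's L96_model.py):

```python
# Single-time-scale Lorenz-96 tendency with periodic boundary conditions:
# dX_k/dt = (X_{k+1} - X_{k-2}) * X_{k-1} - X_k + F
def l96_xdot(x, forcing=8.0):
    n = len(x)
    return [
        (x[(k + 1) % n] - x[(k - 2) % n]) * x[(k - 1) % n] - x[k] + forcing
        for k in range(n)
    ]

print(l96_xdot([8.0] * 8))  # all zeros: the uniform state X_k = F is a fixed point
```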

Create an environment specification and configure CI

We need to clearly define the dependencies. Right now the README says

L96_demo/README.md

Lines 17 to 22 in 1e12f2c

## Required packages
- jupyter (for notebooks)
- numpy (used in computations)
- matplotlib (for plots)
- numba (for significant speed up)

But clearly there are others (pytorch).

We should create a conda environment specification (environment.yaml) that clearly spells out what is required.
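
A minimal sketch of what such a specification could look like (the package list below is illustrative, assembled from the packages mentioned in this issue, not the final file):

```yaml
name: L96M2lines
channels:
  - conda-forge
dependencies:
  - python=3.9
  - jupyter
  - numpy
  - matplotlib
  - numba
  - pytorch
```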

Prepare for JOSE submission

We will submit this repo to JOSE. The submission requirements are detailed here.

Tasks:

  • Check if there is or add an OSI-approved license.
  • Add a short markdown paper: paper.md.
  • Reference for the above markdown should be added to paper.bib.

What should be in paper.md:

  • List all authors and affiliations.
  • Describe submission and explain eligibility for JOSE.
  • Have a statement of need section.
  • Describe the functionality or learning objectives.
  • Tell story of the project (how it came to be).
  • Cite key papers.
  • Should be about 1000 words or 2 pages.

Logical issues with estimating-gcm-parameters

@lesommer - This nice notebook has several issues that should be fixed before the book is out:
When you learn the parameters you rely on a single sample. This is true both for learning from the instantaneous differences and for the trajectory learning. Furthermore, it would be great if you could learn from many samples, and verify whether the new parameters outperform the prior parameterization for new initial conditions (at the moment the test is done on the training data).

Data assimilation notebooks

Hi @feiyulu and @MitchBushuk and @William-gregory -

You had all shown some interest in improving the DA section of our demo (https://m2lines.github.io/L96_demo/notebooks/DA_demo_L96.html). Would it be possible for you all to self-organize and split the work around this? I am happy to discuss more on this as needed. I think Mitch has some slides, so some of the work might just be in converting from slides to notebook format. Maybe skimming the initial part of the machine learning section in the book, can help provide a sense of what the expectations of pedagogy are.

I think the goal is to try to wrap everything up before the end of the month. I hope this is not too much of an ask.

L96-two-scale-description build errors

Issue:

Logs when we build the notebook:

ERROR: Execution Failed with traceback saved in /Users/shubham/Documents/workspace/L96_demo/_build/html/reports/L96-two-scale-description.log
/Users/shubham/Documents/workspace/L96_demo/pdf/README.md:1: WARNING: Non-consecutive header level increase; 0 to 2 [myst.header]
looking for now-outdated files... none found
pickling environment... done
checking consistency... /Users/shubham/Documents/workspace/L96_demo/README.md: WARNING: document isn't included in any toctree
/Users/shubham/Documents/workspace/L96_demo/fossil/README.md: WARNING: document isn't included in any toctree
/Users/shubham/Documents/workspace/L96_demo/notebooks/L96-description.ipynb: WARNING: document isn't included in any toctree
/Users/shubham/Documents/workspace/L96_demo/pdf/README.md: WARNING: document isn't included in any toctree
done


Fix:

WARNING: Non-consecutive header level increase

These warnings appear frequently when building the book. Markdown headings need to be used consistently, e.g.

# Title

## Heading 1

### Heading 2

etc.

We need to go through the notebooks and fix all of these issues so that the warnings go away.

Create binder environment

It would be great to be able to run these demos in My Binder. That would allow everyone to easily run the notebooks. Here are the steps we would need to take

  • make the repo public - does anyone have a problem with this?
  • add an environment.yaml file which specifies the dependencies required for the notebooks

Numba warnings

Problem

With the latest commit, there seems to be a new numba deprecation warning generated from methods in the L96_model and DA_methods files.

Logs

source: Learning Data Assimilation Increments notebook

/home/runner/work/L96_demo/L96_demo/notebooks/DA_methods.py:10: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
  def Lin3dvar(ub, w, H, R, B, opt):
/home/runner/work/L96_demo/L96_demo/notebooks/DA_methods.py:39: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
  def ens_inflate(posterior, prior, opt, factor):
/home/runner/work/L96_demo/L96_demo/notebooks/DA_methods.py:59: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
  def EnKF(prior, obs, H, R, B):
/home/runner/work/L96_demo/L96_demo/notebooks/L96_model.py:12: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
  def L96_eq1_xdot(X, F, advect=True):
/home/runner/work/L96_demo/L96_demo/notebooks/L96_model.py:37: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
  def L96_2t_xdot_ydot(X, Y, F, h, b, c):

Possible fix

Add the parameter nopython=False to the jit decorator
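
A sketch of what the change could look like (hypothetical stub function; the try/except fallback only keeps the snippet runnable without numba installed; nopython=True is shown here because simple numeric kernels compile in nopython mode, whereas the suggestion above uses nopython=False to preserve object-mode behaviour):

```python
# Supplying the nopython keyword explicitly silences the
# NumbaDeprecationWarning about its default changing in 0.59.
try:
    from numba import jit
except ImportError:  # fallback so this sketch runs without numba
    def jit(*args, **kwargs):
        def wrap(func):
            return func
        return wrap

@jit(nopython=True)  # explicit keyword, so the default change is a no-op
def lin3dvar_stub(ub, w):
    # toy stand-in for a DA_methods numeric kernel
    return ub + w

print(lin3dvar_stub(1.0, 2.0))  # 3.0
```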

Will be addressed in the next 7 days

Internal Review Checklist

To claim a notebook, please reply to this issue and I will tag you for it.

Review Process

Each reviewer should do the following for their assigned notebook

  • Install the specified development environment (part of #11) on their local machine
  • Open their assigned notebook in jupyterlab and click "restart kernel and run all cells". The notebook should run from start to finish with no errors (warnings are fine).
  • If any errors are encountered, open a new github issue. The errors should be fixed either by the original author, the reviewer, or anyone else who sees how to fix it. The fix should be submitted via a pull request (see below for details).
  • Once the notebook is running without errors, we move to the stylistic review. Some aspects to consider
    • Is the notebook free from spelling and grammar errors?
    • Is the text clear, relatively free of jargon, and understandable to a general audience?
    • Does the notebook contain appropriate references, both to external papers and other notebooks within the book?
    • Are the figures clear and looking good?
  • The reviewer should feel free to directly edit the notebook to resolve any issues. Changes should be made via pull request.
  • Once all of the above are complete, check off the box below corresponding to the assigned notebook

Notebook Checklist

(Please do not check them off until the review is completed.)

How to make a pull request

We will follow these general instructions

  1. Click "fork" from github
  2. Clone your fork from your local computer
  3. Create a new branch for your feature and switch to it
  4. Make your changes and commit them
  5. Push your branch to your fork on GitHub
  6. Create a pull request via GitHub

Generation of weights needs to happen first

The notebook Neural_network_for_Lorenz96.ipynb generates the file network_3_layers_100_epoches.pth, which is used by two downstream notebooks:

  • random_forest_parameterization.ipynb
  • LRP-L96.ipynb

Ideally we would make these notebooks execute after Neural_network_for_Lorenz96.ipynb. In order to do that, we would need to control the execution order within jupyter book.

Time-scale for climatology based error metric

The notebook https://m2lines.github.io/L96_demo/notebooks/gcm-parameterization-problem.html introduces the climatology-based error metric (equation 6). The subsequent figure shows the climatology of the truth vs. the unparameterized GCM vs. the parameterized GCM; it suggests that the climatology of the parameterized GCM (blue line) is closer to the true climatology (black line) than that of the unparameterized GCM (red line).

Issue

The climatologies were only computed for a time window of t = [0, 100], which is not long enough. Here is a figure that shows the computed climatology as a function of the time window. For short time windows (e.g., t = [0, 100]), the parameterized GCM (blue line) has indeed a time-mean that is closer to the truth. But for longer time scales, the climatology of the unparameterized GCM is better.


I think we have to run the models in that notebook for longer to obtain reliable climatologies.
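
The sensitivity to the averaging window can be illustrated with a toy running-mean check (pure-Python sketch on synthetic data; the real check would use X(t) from the GCM runs):

```python
import random

random.seed(0)
# Stand-in for a model time series X(t) with true climatological mean 1.0.
x = [1.0 + random.gauss(0.0, 1.0) for _ in range(10_000)]

def time_mean(series, window):
    # Mean over the first `window` samples, i.e. the t = [0, window] climatology.
    return sum(series[:window]) / window

short_mean = time_mean(x, 100)     # analogous to the t = [0, 100] window
long_mean = time_mean(x, 10_000)   # a much longer window
print(short_mean, long_mean)  # the short-window estimate is noticeably noisier
```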

Equation discovery notebook

Just to follow up on our discussion last week, we decided to start with equation discovery as the next big section in the Jupyter book. I will list below a possible structure for us to iterate on.

Combine notebooks on L96 model

These 3 notebooks

  • L96-description
  • L96-two-timescale-description
  • presentation-model-setup

can be merged into two: one that describes the model equations and expands on why we are using this model; and another on how we solve the equations (numerics, etc.).

Reconcile and combine DA notebooks

There are three DA related notebooks:

  1. DA_demo_L96.ipynb - the first demonstration of a DA system applied to L96
  2. DA_demo_repeat.ipynb - a stripped down version of 1
  3. Learning-DA-increments.ipynb
  • Number 2 should be redundant but may have bug fixes that need applying to 1.
  • Number 2 currently generates the file increments.npz which is used in 3. Inserting 2 at the top of 3 would avoid the artifact but still have the duplication. I think this explains #42

Missing Jupyterbook title

I just noticed that the title of the book disappeared... I believe we had something like "Learning Machine Learning and Climate Modeling with Lorenz 96" or something like that. We can change that but we should definitely have a title.

Some minor paper.md edits

In performing final review/editing checks I've read through your paper. The basic structure and content flow are great, but there are some grammatical and syntactical inconsistencies that should be corrected before final acceptance and publication. In no particular order:

  • Inconsistency in the use of "ML" versus "machine learning"; since the acronym "ML" is defined in the first sentence, it would stand to reason that any subsequent reference be simply "ML".
  • No need for the "(described later)" parenthetical, as this is the core subject matter of your book.
  • Both "subgrid processes" and "sub-gridd processes" appear; pick one or the other.
  • "i.e." should always have periods after each letter (there's one in the second sentence that does not).
  • There's a missing Oxford comma in the second sentence of the second paragraph ("to simulate, understand and predict climate"); everywhere else, the Oxford comma is used, hence consistency would dictate using it here too.

Embed .py modules in notebooks

In #22 (now closed) it was pointed out that "we should try to eliminate the use of modules for anything other than L96 itself. This means LRP_gradient.py, DA_methods.py: all of this should be in notebooks, not modules."

  • In the case of LRP_gradient.py it looks like it is not actually used.
  • In the case of DA_methods.py, it is used in two notebooks, but the second might be a repeat of the first - needs investigation.
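
To confirm which notebooks actually import these modules, one could scan the notebook JSON (a sketch; the notebooks directory and module names follow the discussion above):

```python
import json
from pathlib import Path

def notebooks_importing(module_name, root="notebooks"):
    # Return the notebooks under `root` whose code mentions `module_name`.
    hits = []
    for nb in sorted(Path(root).glob("*.ipynb")):
        source = "".join(
            line
            for cell in json.loads(nb.read_text())["cells"]
            for line in cell.get("source", [])
        )
        if module_name in source:
            hits.append(nb.name)
    return hits

# e.g. notebooks_importing("DA_methods"), notebooks_importing("LRP_gradient")
```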

gcm analog needs to be simplified

At the moment gcm-analogue.ipynb has some text in the last two sections that is irrelevant to the main message. This should be either deleted or incorporated elsewhere: I think the errors discussion should go to the key_aspects notebook and the stochastic discussion needs a separate notebook (if it is self-contained).

Error in 08-Implementation/Neural%20Network%20Advection.ipynb

When I try to execute this notebook I get the following error in cell 43

# This might work well for F=20

gcm_nn4_x10 = GCM_network_S(Forcing_x10, nn_3l_loss4)
xnn4_x10, tnn4_x10 = gcm_nn4_x10(init_cond, dt, int(T / dt), nn_3l_loss4)


CompExps(
    [t2d_x10, x2d_x10, "2d"],
    [
        [tnn4_x10, xnn4_x10, "1d w/ rescaled NN mom."],
    ],
)
traceback
/var/folders/n8/63q49ms55wxcj_gfbtykwp5r0000gn/T/ipykernel_24547/2792372065.py:25: RuntimeWarning: overflow encountered in square
  a.plot(TN, np.sum(XN**2, axis=1), label=LN, linewidth=2)
/opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/numpy/core/fromnumeric.py:86: RuntimeWarning: overflow encountered in reduce
  return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
/var/folders/n8/63q49ms55wxcj_gfbtykwp5r0000gn/T/ipykernel_24547/2792372065.py:31: RuntimeWarning: overflow encountered in square
  _Y.append(np.percentile(np.sum(XN[int(5 // dt) :] ** 2, axis=1), ii))
/opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/numpy/core/fromnumeric.py:86: RuntimeWarning: overflow encountered in reduce
  return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
/opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/numpy/lib/function_base.py:4009: RuntimeWarning: invalid value encountered in subtract
  diff_b_a = subtract(b, a)
/opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/transforms.py:396: RuntimeWarning: overflow encountered in double_scalars
  return (x0, y0, x1 - x0, y1 - y0)
Error in callback <function install_repl_displayhook.<locals>.post_execute at 0x1101ac670> (for post_execute):
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/pyplot.py:138, in install_repl_displayhook.<locals>.post_execute()
    136 def post_execute():
    137     if matplotlib.is_interactive():
--> 138         draw_all()

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/_pylab_helpers.py:137, in Gcf.draw_all(cls, force)
    135 for manager in cls.get_all_fig_managers():
    136     if force or manager.canvas.figure.stale:
--> 137         manager.canvas.draw_idle()

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/backend_bases.py:2060, in FigureCanvasBase.draw_idle(self, *args, **kwargs)
   2058 if not self._is_idle_drawing:
   2059     with self._idle_draw_cntx():
-> 2060         self.draw(*args, **kwargs)

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/backends/backend_agg.py:436, in FigureCanvasAgg.draw(self)
    432 # Acquire a lock on the shared font cache.
    433 with RendererAgg.lock, \
    434      (self.toolbar._wait_cursor_for_draw_cm() if self.toolbar
    435       else nullcontext()):
--> 436     self.figure.draw(self.renderer)
    437     # A GUI class may be need to update a window using this draw, so
    438     # don't forget to call the superclass.
    439     super().draw()

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/artist.py:73, in _finalize_rasterization.<locals>.draw_wrapper(artist, renderer, *args, **kwargs)
     71 @wraps(draw)
     72 def draw_wrapper(artist, renderer, *args, **kwargs):
---> 73     result = draw(artist, renderer, *args, **kwargs)
     74     if renderer._rasterizing:
     75         renderer.stop_rasterizing()

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/artist.py:50, in allow_rasterization.<locals>.draw_wrapper(artist, renderer)
     47     if artist.get_agg_filter() is not None:
     48         renderer.start_filter()
---> 50     return draw(artist, renderer)
     51 finally:
     52     if artist.get_agg_filter() is not None:

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/figure.py:2810, in Figure.draw(self, renderer)
   2807         # ValueError can occur when resizing a window.
   2809 self.patch.draw(renderer)
-> 2810 mimage._draw_list_compositing_images(
   2811     renderer, self, artists, self.suppressComposite)
   2813 for sfig in self.subfigs:
   2814     sfig.draw(renderer)

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/image.py:132, in _draw_list_compositing_images(renderer, parent, artists, suppress_composite)
    130 if not_composite or not has_images:
    131     for a in artists:
--> 132         a.draw(renderer)
    133 else:
    134     # Composite any adjacent images together
    135     image_group = []

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/artist.py:50, in allow_rasterization.<locals>.draw_wrapper(artist, renderer)
     47     if artist.get_agg_filter() is not None:
     48         renderer.start_filter()
---> 50     return draw(artist, renderer)
     51 finally:
     52     if artist.get_agg_filter() is not None:

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/axes/_base.py:3082, in _AxesBase.draw(self, renderer)
   3079         a.draw(renderer)
   3080     renderer.stop_rasterizing()
-> 3082 mimage._draw_list_compositing_images(
   3083     renderer, self, artists, self.figure.suppressComposite)
   3085 renderer.close_group('axes')
   3086 self.stale = False

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/image.py:132, in _draw_list_compositing_images(renderer, parent, artists, suppress_composite)
    130 if not_composite or not has_images:
    131     for a in artists:
--> 132         a.draw(renderer)
    133 else:
    134     # Composite any adjacent images together
    135     image_group = []

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/artist.py:50, in allow_rasterization.<locals>.draw_wrapper(artist, renderer)
     47     if artist.get_agg_filter() is not None:
     48         renderer.start_filter()
---> 50     return draw(artist, renderer)
     51 finally:
     52     if artist.get_agg_filter() is not None:

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/axis.py:1158, in Axis.draw(self, renderer, *args, **kwargs)
   1155     return
   1156 renderer.open_group(__name__, gid=self.get_gid())
-> 1158 ticks_to_draw = self._update_ticks()
   1159 ticklabelBoxes, ticklabelBoxes2 = self._get_tick_bboxes(ticks_to_draw,
   1160                                                         renderer)
   1162 for tick in ticks_to_draw:

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/axis.py:1045, in Axis._update_ticks(self)
   1040 def _update_ticks(self):
   1041     """
   1042     Update ticks (position and labels) using the current data interval of
   1043     the axes.  Return the list of ticks that will be drawn.
   1044     """
-> 1045     major_locs = self.get_majorticklocs()
   1046     major_labels = self.major.formatter.format_ticks(major_locs)
   1047     major_ticks = self.get_major_ticks(len(major_locs))

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/axis.py:1277, in Axis.get_majorticklocs(self)
   1275 def get_majorticklocs(self):
   1276     """Return this Axis' major tick locations in data coordinates."""
-> 1277     return self.major.locator()

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/ticker.py:2114, in MaxNLocator.__call__(self)
   2112 def __call__(self):
   2113     vmin, vmax = self.axis.get_view_interval()
-> 2114     return self.tick_values(vmin, vmax)

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/ticker.py:2122, in MaxNLocator.tick_values(self, vmin, vmax)
   2119     vmin = -vmax
   2120 vmin, vmax = mtransforms.nonsingular(
   2121     vmin, vmax, expander=1e-13, tiny=1e-14)
-> 2122 locs = self._raw_ticks(vmin, vmax)
   2124 prune = self._prune
   2125 if prune == 'lower':

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/ticker.py:2078, in MaxNLocator._raw_ticks(self, vmin, vmax)
   2075     igood = (steps < 1) | (np.abs(steps - np.round(steps)) < 0.001)
   2076     steps = steps[igood]
-> 2078 istep = np.nonzero(steps >= raw_step)[0][0]
   2080 # Classic round_numbers mode may require a larger step.
   2081 if mpl.rcParams['axes.autolimit_mode'] == 'round_numbers':

IndexError: index 0 is out of bounds for axis 0 with size 0
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib_inline/backend_inline.py:43, in show(close, block)
     39 try:
     40     for figure_manager in Gcf.get_all_fig_managers():
     41         display(
     42             figure_manager.canvas.figure,
---> 43             metadata=_fetch_figure_metadata(figure_manager.canvas.figure)
     44         )
     45 finally:
     46     show._to_draw = []

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib_inline/backend_inline.py:231, in _fetch_figure_metadata(fig)
    228 # determine if a background is needed for legibility
    229 if _is_transparent(fig.get_facecolor()):
    230     # the background is transparent
--> 231     ticksLight = _is_light([label.get_color()
    232                             for axes in fig.axes
    233                             for axis in (axes.xaxis, axes.yaxis)
    234                             for label in axis.get_ticklabels()])
    235     if ticksLight.size and (ticksLight == ticksLight[0]).all():
    236         # there are one or more tick labels, all with the same lightness
    237         return {'needs_background': 'dark' if ticksLight[0] else 'light'}

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib_inline/backend_inline.py:234, in <listcomp>(.0)
    228 # determine if a background is needed for legibility
    229 if _is_transparent(fig.get_facecolor()):
    230     # the background is transparent
    231     ticksLight = _is_light([label.get_color()
    232                             for axes in fig.axes
    233                             for axis in (axes.xaxis, axes.yaxis)
--> 234                             for label in axis.get_ticklabels()])
    235     if ticksLight.size and (ticksLight == ticksLight[0]).all():
    236         # there are one or more tick labels, all with the same lightness
    237         return {'needs_background': 'dark' if ticksLight[0] else 'light'}

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/axis.py:1249, in Axis.get_ticklabels(self, minor, which)
   1247 if minor:
   1248     return self.get_minorticklabels()
-> 1249 return self.get_majorticklabels()

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/axis.py:1201, in Axis.get_majorticklabels(self)
   1199 def get_majorticklabels(self):
   1200     """Return this Axis' major tick labels, as a list of `~.text.Text`."""
-> 1201     ticks = self.get_major_ticks()
   1202     labels1 = [tick.label1 for tick in ticks if tick.label1.get_visible()]
   1203     labels2 = [tick.label2 for tick in ticks if tick.label2.get_visible()]

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/axis.py:1371, in Axis.get_major_ticks(self, numticks)
   1369 r"""Return the list of major `.Tick`\s."""
   1370 if numticks is None:
-> 1371     numticks = len(self.get_majorticklocs())
   1373 while len(self.majorTicks) < numticks:
   1374     # Update the new tick label properties from the old.
   1375     tick = self._get_tick(major=True)

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/axis.py:1277, in Axis.get_majorticklocs(self)
   1275 def get_majorticklocs(self):
   1276     """Return this Axis' major tick locations in data coordinates."""
-> 1277     return self.major.locator()

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/ticker.py:2114, in MaxNLocator.__call__(self)
   2112 def __call__(self):
   2113     vmin, vmax = self.axis.get_view_interval()
-> 2114     return self.tick_values(vmin, vmax)

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/ticker.py:2122, in MaxNLocator.tick_values(self, vmin, vmax)
   2119     vmin = -vmax
   2120 vmin, vmax = mtransforms.nonsingular(
   2121     vmin, vmax, expander=1e-13, tiny=1e-14)
-> 2122 locs = self._raw_ticks(vmin, vmax)
   2124 prune = self._prune
   2125 if prune == 'lower':

File /opt/miniconda3/envs/L96M2lines/lib/python3.9/site-packages/matplotlib/ticker.py:2078, in MaxNLocator._raw_ticks(self, vmin, vmax)
   2075     igood = (steps < 1) | (np.abs(steps - np.round(steps)) < 0.001)
   2076     steps = steps[igood]
-> 2078 istep = np.nonzero(steps >= raw_step)[0][0]
   2080 # Classic round_numbers mode may require a larger step.
   2081 if mpl.rcParams['axes.autolimit_mode'] == 'round_numbers':

IndexError: index 0 is out of bounds for axis 0 with size 0
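This `MaxNLocator` IndexError is commonly triggered when the plotted data has blown up to values near the float64 limit (for example, from an unstable integration), leaving the tick locator with an axis range for which it can find no valid step. The sketch below is a hypothetical illustration of the guard, not the notebook's actual code; the overflow value and clip bounds are made up:

```python
import numpy as np
import matplotlib

matplotlib.use("Agg")  # headless backend so the example runs anywhere
import matplotlib.pyplot as plt

# Hypothetical trigger: data that overflowed to ~1e308 gives the tick
# locator an axis range it cannot subdivide, raising the IndexError above.
y = np.array([0.0, 1e308])

# Guard: clip (or mask) runaway / non-finite values before plotting.
y_safe = np.clip(y, -1e6, 1e6)

fig, ax = plt.subplots()
ax.plot(y_safe)
fig.savefig("diagnostic.png")  # draws without tripping the locator
plt.close(fig)
```

Masking with `np.isfinite` works the same way when the data contains NaN or inf rather than large finite values.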

Can I make this repo public?

I would like to start building our Jupyter Book automatically (see #15) and publishing it online. Our free GitHub account does not support GitHub Pages on private repos, so I would like to make this repo public. I see no reason not to: as far as I can tell, there is nothing secret or "in progress" here.

I would like approval to do this from all contributors, so I am tagging @m2lines/m2lines. Please give this comment a 👍 or 👎 to indicate your preference.

Conda fails to solve environment - Windows

Issue

When setting up the environment for building and executing the notebooks, I encountered a LibMambaUnsatisfiableError due to a conflict in environment.yaml.

Context

I uninstalled Anaconda and performed a fresh install. Through the Anaconda prompt, I opened the folder containing the local repository and attempted to create a new environment from environment.yaml as described in the README. conda reported that the packages nomkl and pytorch are incompatible, because pytorch requires mkl. The full output of the command can be found in a pastebin here.

Workaround

I created an empty environment and manually conda-installed each package listed in environment.yaml, except torch and torchvision, which I installed with pip instead. I was then able to build and execute the notebooks successfully.
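The manual steps described above might look like the following sketch (the environment name comes from the tracebacks above; the package names other than torch and torchvision are illustrative stand-ins for the actual contents of environment.yaml):

```shell
# Create an empty environment, then install the conda packages one by one
conda create -n L96M2lines python=3.9
conda activate L96M2lines
conda install -c conda-forge numpy matplotlib jupyter-book  # ...and the rest of environment.yaml

# Install the PyTorch packages with pip to sidestep the mkl/nomkl conflict
pip install torch torchvision
```

Installing torch via pip avoids the conda solver entirely for that package, which is why the nomkl constraint no longer conflicts.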
