
micro-manager's Introduction

preCICE


preCICE stands for Precise Code Interaction Coupling Environment. Its main component is a library that can be used by simulation programs to be coupled together in a partitioned way, enabling multi-physics simulations, such as fluid-structure interaction.

If you are new to preCICE, please have a look at our documentation and at precice.org. You may also prefer to get and install a binary package for the latest release (main branch).

preCICE overview

preCICE is an academic project, developed at the Technical University of Munich and at the University of Stuttgart. If you use preCICE, please cite us.

micro-manager's People

Contributors

erikscheurer, ishaandesai, mrogowski, scriptkiddi, uekerman


micro-manager's Issues

Add pictorial description of macro-micro coupling to README

Currently, the README only has text describing the concept and workings of the Micro Manager. In addition to the provided information, a pictorial description of the macro-micro coupling strategy would help readers understand the core concept of the Micro Manager.

Addressing file duplication in `examples/*-dummy`

In the folders examples/python-dummy and examples/cpp-dummy, the file run_micro_manager.py exists. In principle, this file could be placed in examples/ and used for both applications. But this would mean that the Python and C++ dummies need their own JSON configuration files, because the name of the micro code file is the same. If only one run_micro_manager.py is to exist, we would end up with several configuration files:

  • micro-manager-config-cpp.json
  • micro-manager-config-python.json
  • micro-manager-adaptivity-config-cpp.json
  • micro-manager-adaptivity-config-python.json

Also, merging micro-manager-config-*.json and micro-manager-adaptivity-config-*.json is not possible because the adaptivity variant is tested in GitHub Actions.

@uekerman thoughts?

Unable to write initial data to preCICE for macro-micro coupled scenarios

Describe your setup

Operating system (e.g. Linux distribution and version): Ubuntu 22.04
preCICE state: precice/precice@8393841
Micro Manager: 6dcbf68

Describe the problem

Although the Micro Manager uses the Python API, the problem is not related to it and is hence described below in terms of the original C++ API.

The Micro Manager does not need a coupling mesh of its own; it only needs the vertex information of the macro-scale participant. Using getMeshVertexIDsAndCoordinates(), it gets the vertex IDs and coordinates of the macro-scale participant's mesh and then creates one micro simulation per vertex of this mesh. It checks whether the micro simulation objects have a function called initialize() and, if they do, calls it to get the initial states. It then attempts to write the initial states to preCICE by checking requiresInitialData() and subsequently calling initialize().

The problem is that the function getMeshVertexIDsAndCoordinates() cannot be run before initialize(). But without running it, the Micro Manager does not know how many micro simulations to create, and hence it cannot get their initial states.
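The conflict can be sketched against the preCICE v3 Python bindings (which the Micro Manager uses); the participant, mesh, and data names below are illustrative, and the exact API names may differ slightly at the referenced commit:

    import numpy as np
    import precice

    participant = precice.Participant("Micro-Manager", "precice-config.xml", 0, 1)

    # The Micro Manager uses direct access to the macro participant's mesh.
    participant.set_mesh_access_region("macro-mesh", [0.0, 1.0, 0.0, 1.0])

    # Needed to know how many micro simulations to create ...
    # ... but this call is only allowed AFTER initialize().
    vertex_ids, coords = participant.get_mesh_vertex_ids_and_coordinates("macro-mesh")

    # ... whereas initial data has to be written BEFORE initialize().
    if participant.requires_initial_data():
        initial_data = np.zeros(len(vertex_ids))  # would come from the micro simulations
        participant.write_data("macro-mesh", "micro-scalar-data", vertex_ids, initial_data)

    participant.initialize()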

This problem was not encountered before because the functions initialize() and initializeData() were separate. Back then the order of calling functions was:

  1. initialize()
  2. getMeshVertexIDsAndCoordinates()
  3. isWriteDataRequired()
  4. writeData...
  5. initializeData()

Steps to reproduce

https://github.com/precice/micro-manager/actions/runs/5825484943/job/15797159769?pr=51

Expected behaviour

Currently, I do not see a way for the Micro Manager to both get the mesh information from the macro-scale participant and provide the initial data of the micro simulations to preCICE. One solution is to not allow micro simulations to write initial data to preCICE, but I have to study how this would affect macro-micro simulations.

Opening the issue to discuss whether I am overlooking something rather obvious.

Handle crashing of micro simulations in a proper way

Currently, when the Micro Manager is controlling and running micro simulations and one simulation crashes or exits improperly, the Micro Manager run just hangs. It is nearly impossible to know which micro simulation crashed and why the overall execution has hung.

The Micro Manager should be able to handle a simulation crash by ...

  • ... continuing to run the rest of the micro simulations.
  • ... logging the simulation crash in the log file.
  • ... providing the macro location at which the simulation crashed.
  • ... creating a new log file and parsing the error output from the crashed simulation to it.

It is not clear whether all of these are feasible, so some initial investigation is necessary. A sketch of the basic error handling is given below.
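A minimal sketch of the first three points, assuming hypothetical names for the list of simulations, the macro data, and the logger (these are not the Micro Manager's actual internals):

    import logging
    import traceback

    logger = logging.getLogger("micro_manager")

    def solve_all(micro_sims, macro_data, macro_coords, dt):
        """Run all micro simulations, logging and skipping the ones that crash."""
        results = []
        for i, sim in enumerate(micro_sims):
            try:
                results.append(sim.solve(macro_data[i], dt))
            except Exception:
                # Log which simulation crashed and at which macro location,
                # then continue with the remaining simulations.
                logger.error(
                    "Micro simulation %d at macro location %s crashed:\n%s",
                    i, macro_coords[i], traceback.format_exc(),
                )
                results.append(None)
        return results

Parsing the crashed simulation's error output into a separate log file could then be built on top of the same try/except block.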

Call micro simulation solve routine with multiple CPUs

In the case of a large micro simulation (on the order of millions of elements or degrees of freedom at each Gauss point of the macro problem), or even multiple scales per solve call, access to multiple CPUs will be required to undertake the analysis. Can we give each solve() call access to multiple CPUs?
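One possible approach, sketched with mpi4py: split the Micro Manager's communicator into groups and hand each micro simulation a sub-communicator, so that its solve() call can run on several ranks. The group size and the communicator-aware constructor are assumptions, not existing features:

    from mpi4py import MPI

    world = MPI.COMM_WORLD
    ranks_per_sim = 4  # assumption: every micro simulation gets four ranks
    color = world.Get_rank() // ranks_per_sim
    sub_comm = world.Split(color=color, key=world.Get_rank())

    # Hypothetical: the micro simulation would accept a communicator at construction
    # and parallelize its own solve() over it.
    # sim = MicroSimulation(comm=sub_comm)
    # micro_data = sim.solve(macro_data, dt)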

Does the micro problem API need a class structure?

Right now, we ask the user to formulate the micro problem as a class named MicroSimulation, which then has functions titled initialize, solve, etc. Is such a class structure even necessary? The Micro Manager could create a class itself and add the functions to it.
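For reference, the structure we currently ask for looks roughly like this (a sketch; the data names are illustrative and the exact signatures may differ between versions):

    class MicroSimulation:
        def __init__(self):
            # Set up the micro problem (mesh, solver, material parameters, ...).
            self._state = 0.0

        def initialize(self):
            # Optional: return the initial micro data.
            return {"micro-scalar-data": self._state}

        def solve(self, macro_data, dt):
            # Advance the micro problem by dt using the macro input.
            self._state += macro_data["macro-scalar-data"] * dt
            return {"micro-scalar-data": self._state}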

Version incompatibility

The latest release of libprecice is v2.5.0, so based on the documentation on the preCICE website, I can only install pyprecice v2.5.0 because of its dependency on libprecice. However, the Micro Manager depends on pyprecice>=v3.0dev2. How can this incompatibility be addressed?

Compatibility issue between deepcopy and pybind?

A Dumux-Dumux coupled simulation (see https://github.com/HelenaKschidock/macro-micro) fails with the current develop branch. Pybind was built locally using pip (after the micro-manager installation).

(0) 21:58:31 [cplscheme::BaseCouplingScheme]:235 in advance: Time window completed
(0) 21:58:31 [impl::SolverInterfaceImpl]:464 in advance: iteration: 1 of 30, time-window: 2, time: 0.005 of 0.25, time-window-size: 0.005, max-timestep-length: 0.005, ongoing: yes, time-window-complete: yes, write-iteration-checkpoint 
Saving state of micro problem (0)
Saving state of micro problem (1)
Saving state of micro problem (2)

[...]

Saving state of micro problem (126)
Saving state of micro problem (127)
Traceback (most recent call last):
  File "/home/kschidock/FM/dumux2/dumux/dumux-adapter/build-cmake/examples/macro-micro/micro-heat/run_micro_manager.py", line 11, in <module>
    manager.solve()
  File "/home/kschidock/.local/lib/python3.10/site-packages/micro_manager/micro_manager.py", line 505, in solve
    similarity_dists, micro_sim_states = self.compute_adaptivity(similarity_dists, micro_sim_states)
  File "/home/kschidock/.local/lib/python3.10/site-packages/micro_manager/micro_manager.py", line 391, in compute_adaptivity
    micro_sim_states_n = self._adaptivity_controller.update_inactive_micro_sims(
  File "/home/kschidock/.local/lib/python3.10/site-packages/micro_manager/adaptivity.py", line 168, in update_inactive_micro_sims
    micro_sims[i] = deepcopy(micro_sims[associated_active_id])
  File "/usr/lib/python3.10/copy.py", line 161, in deepcopy
    rv = reductor(4)
TypeError: cannot pickle 'MicroProblem' object
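One possible workaround, sketched on the Python side: give the wrapped class a __deepcopy__ so that copy.deepcopy no longer tries to pickle the C++ object. The module name and the get_state/set_state methods are hypothetical and would have to be exposed by the C++ class; alternatively, pickling support could be added directly in the pybind11 bindings:

    import copy

    from micro_problem import MicroProblem  # hypothetical pybind11 module

    class CopyableMicroProblem(MicroProblem):
        def __deepcopy__(self, memo):
            new_sim = CopyableMicroProblem()
            # Hypothetical: copy the internal state instead of pickling the object.
            new_sim.set_state(copy.deepcopy(self.get_state(), memo))
            return new_sim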

Domain decomposition according to the number of nodes

Currently, the Micro Manager splits the domain in an MPI run according to the geometric macro_bounds, instead of according to the number of simulations contained in each partition.
As the micro simulations are independent of each other, the Micro Manager should distribute them evenly across the CPU cores in use to allow optimal resource usage, for example as in the sketch below.
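A minimal sketch of distributing n_sims micro simulations evenly over n_ranks ranks (the function name and variables are illustrative, not the Micro Manager's actual code):

    def partition_evenly(n_sims, n_ranks, rank):
        # Every rank gets n_sims // n_ranks simulations; the remainder is spread
        # over the first ranks. Returns the half-open range of local simulation indices.
        base, remainder = divmod(n_sims, n_ranks)
        counts = [base + (1 if r < remainder else 0) for r in range(n_ranks)]
        start = sum(counts[:rank])
        return start, start + counts[rank]

    # Example: 10 simulations on 4 ranks -> ranges (0, 3), (3, 6), (6, 8), (8, 10).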

Provide functionality to pass quantities to the micro simulations during initialization

For certain cases, a user may want to set the initial state of a micro simulation based on macro information, like, for example, the macro location of the micro simulation.

@mathiskelm postulates such a scenario: Say I know (roughly) the heterogeneity of the porosity in my macro domain, e.g. a simple space-dependent $\phi = 0.5 + 0.01 y$ in a domain $[0,10]^2$. Then I would like the micro sims that correspond to macro $y = 0$ to have a different initial state than the ones at macro $y = 10$.

The Micro Manager could give the user a choice between several quantities it has available to pass to the micro simulations during initialization; macro coordinates could be one of them. The set of quantities would be fixed and would be configured via the JSON configuration file.
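A sketch of what such an interface could look like on the micro-simulation side (the signature is hypothetical; which quantities get passed would be fixed in the JSON configuration):

    class MicroSimulation:
        def initialize(self, macro_coords):
            # Example from the scenario above: porosity depends on the macro y-coordinate.
            y = macro_coords[1]
            self._porosity = 0.5 + 0.01 * y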

Compatibility of dummies with command line

The Micro Manager can be called from the command line with micro_manager. Currently, the solverdummies are not compatible with this call, as there are different solvers for Python and C++.
Is this something we want to change, or should we just clarify the usage in the documentation?

Add functionality to allow the Micro Manager to initialize micro simulations from different file/solver sources simultaneously

Currently, the Micro Manager accepts one micro simulation file (with Python or C++ code) and initializes all micro simulation objects from it. In certain applications, a user may want to initialize micro simulations from more than one source simultaneously. For example, at some macro locations high-fidelity micro simulations may be necessary, while at other macro locations low-fidelity micro simulations would be sufficient.

As a first step, such a dynamic setup can be done at the time of initialization, and it will not change during the simulation.

Check package deployed on PyPI via CI

Currently there is no check if the package deployed to PyPI actually works. Such a check should be done once a package is deployed after a release. Running just the solver dummy would be sufficient.

Allow user to provide comparison function for similarity calculation in adaptivity

Currently the adaptivity calculates similarity between two simulations based on an absolute difference:

    if dim:  # vector data: sum the absolute component-wise differences
        for d in range(dim):
            data_diff += abs(data[counter_1, d] - data[counter_2, d])
    else:  # scalar data
        data_diff = abs(data[counter_1] - data[counter_2])

Instead of using an absolute difference, the user could provide a function of their choice to be used in the similarity computation. Alternatively, the Micro Manager could provide various options for the user to pick from. One possible comparison function is sketched below.
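A sketch of such a user-provided comparison function (the interface is hypothetical; how the function would be registered with the Micro Manager is exactly what this issue is about):

    import numpy as np

    def relative_difference(value_1, value_2, eps=1e-12):
        # Component-wise relative difference instead of the absolute difference.
        value_1 = np.atleast_1d(value_1)
        value_2 = np.atleast_1d(value_2)
        scale = np.maximum(np.abs(value_1), np.abs(value_2)) + eps
        return float(np.sum(np.abs(value_1 - value_2) / scale))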

Adapt the similarity condition to the residual of the fixed-point problem

Generally, we postulate that the micro simulations take up most of the computing time when the Micro Manager is used for multiscale simulations with a large number of micro simulations.

As Delaisse's paper indicated, we could

achieve a significant speed-up by not converging to the final sub-problem tolerance in each solver call.

Although we are not supposed to control the tolerance of the sub-problems in preCICE (as Nicolas did in the paper), we could control the number of micro simulations through the Micro Manager to obtain a coarser or finer solution of the whole field. That is, we could use a loose similarity condition to solve fewer micro problems and thus get a coarse result, and switch to a strict similarity condition when we are close to convergence.

Based on this idea, we need to discuss the following points:

  • If we adapt the similarity condition to the fixed-point tolerance of the quasi-Newton method, and the QN computation is only done at the end of each time window, we would also have to update the similarity condition between the implicit iterations within each time window, and restart from the loosest condition at each new time window.
  • This method is only applicable to implicit coupling.
  • How should we select the initial (relative) similarity condition, and how do we rationally adapt the condition according to the fixed-point residual?
  • Can we retain the accuracy of the original method under the assumption that the total number of Newton iterations in the sub-problems is reduced?

Allow micro simulations to pass data to be used only in the adaptivity calculation

Currently, micro simulations are only allowed to pass data that is to be sent to the macro side. However, for the adaptivity calculation, the micro simulations may have data related to their state that is suitable for calculating the similarity distance. There should be an option to pass such data on to the Micro Manager, for example as sketched below.
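A sketch of how a micro simulation could return adaptivity-only data alongside the coupling data (the second return value is a proposal, not an existing API; data names are illustrative):

    class MicroSimulation:
        def __init__(self):
            self._state = 0.0

        def solve(self, macro_data, dt):
            self._state += macro_data["macro-scalar-data"] * dt
            micro_data = {"micro-scalar-data": self._state}     # sent to the macro side
            adaptivity_data = {"state-norm": abs(self._state)}  # used only for adaptivity
            return micro_data, adaptivity_data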

pyproject.toml instead of setup.py

setup.py-based installation is deprecated and pyproject.toml is the recommended approach. Since this package does not have a complicated build setup, converting to pyproject.toml should be very easy to accomplish.

Micro manager should be able to control micro simulations written in C++

Currently, the Micro Manager is constructed in a way that it can only control micro simulations written in Python: the user gives a path to a micro simulation script written in Python in the Micro Manager configuration file. However, a user may have a micro simulation written in some other language, like C++, C or even Fortran. For the time being, the Micro Manager can be updated so that code written in C and C++ can be used. C and C++ code cannot be used directly; in most cases it will be built into a shared library (.so) file. The Micro Manager then needs to load the shared library and call the appropriate functions.

For micro simulations written in C++, pybind11 was the tool used in #22.
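For illustration, once the C++ micro simulation is compiled into a shared library with pybind11, the Micro Manager can import and drive it like a Python class (a sketch; the module and data names are illustrative):

    # micro_dummy is a hypothetical module name produced by the pybind11 build.
    from micro_dummy import MicroSimulation

    sim = MicroSimulation()
    micro_data = sim.solve({"macro-scalar-data": 1.0}, 0.01)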

Improve domain decomposition strategy

Currently, the domain decomposition partitions the macro domain in the X and Y directions as optimally as possible. No partitioning is done in the Z direction for 3D cases. This partitioning is not optimal, and the domain needs to be partitioned in the Z direction too.

In addition to the optimal partitioning strategy, the user can also be given a choice of partitions in X, Y and Z directions. Such a choice would be configured through the JSON configuration file.

Error in unit tests with v0.3.0

I am seeing the errors below when running the unit tests.

======================================================================
ERROR: test_communicate_micro_output (test_adaptivity_parallel.TestGlobalAdaptivity)
Test functionality to communicate micro output from active sims to their associated inactive sims, for a global adaptivity setting.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/bharatmedasani/Software/reviews/micro-manager-0.3.0/tests/unit/test_adaptivity_parallel.py", line 115, in test_communicate_micro_output
    adaptivity_controller.communicate_micro_output(is_sim_active, sim_is_associated_to, sim_output)
  File "/Users/bharatmedasani/miniconda3/envs/tmp38/lib/python3.8/site-packages/micro_manager/adaptivity/global_adaptivity.py", line 153, in communicate_micro_output
    recv_reqs = self._p2p_comm(assoc_active_ids, micro_output)
  File "/Users/bharatmedasani/miniconda3/envs/tmp38/lib/python3.8/site-packages/micro_manager/adaptivity/global_adaptivity.py", line 316, in _p2p_comm
    req = self._comm.irecv(bufsize, source=recv_rank, tag=tag)
  File "mpi4py/MPI/Comm.pyx", line 1502, in mpi4py.MPI.Comm.irecv
  File "mpi4py/MPI/msgpickle.pxi", line 417, in mpi4py.MPI.PyMPI_irecv
mpi4py.MPI.Exception: MPI_ERR_RANK: invalid rank

======================================================================
ERROR: test_update_inactive_sims_global_adaptivity (test_adaptivity_parallel.TestGlobalAdaptivity)
Test functionality to update inactive simulations in a particular setting, for a global adaptivity setting.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/bharatmedasani/Software/reviews/micro-manager-0.3.0/tests/unit/test_adaptivity_parallel.py", line 70, in test_update_inactive_sims_global_adaptivity
    is_sim_active, sim_is_associated_to = adaptivity_controller._update_inactive_sims(
  File "/Users/bharatmedasani/miniconda3/envs/tmp38/lib/python3.8/site-packages/micro_manager/adaptivity/global_adaptivity.py", line 230, in _update_inactive_sims
    recv_reqs = self._p2p_comm(list(to_be_activated_map.keys()), sim_states_and_global_ids)
  File "/Users/bharatmedasani/miniconda3/envs/tmp38/lib/python3.8/site-packages/micro_manager/adaptivity/global_adaptivity.py", line 316, in _p2p_comm
    req = self._comm.irecv(bufsize, source=recv_rank, tag=tag)
  File "mpi4py/MPI/Comm.pyx", line 1502, in mpi4py.MPI.Comm.irecv
  File "mpi4py/MPI/msgpickle.pxi", line 417, in mpi4py.MPI.PyMPI_irecv
mpi4py.MPI.Exception: MPI_ERR_RANK: invalid rank

----------------------------------------------------------------------

Handle cases where some ranks do not have any micro simulations

Currently, the Micro Manager requires that every rank has at least one micro simulation. However, this may not be the case for bigger simulations. The Micro Manager should be able to handle empty ranks properly. Partial support for handling empty ranks was introduced before, but the new adaptivity functionality has removed that support again.

Add CI to check test coverage

It would be good to check the test coverage of every change via a pull request. As the Micro Manager is written in Python, any solution would involve Coverage.py. There are ways to automate this in GitHub Actions via existing actions or Codecov, but it seems easier to implement a simple Python script which computes the test coverage and checks whether it is above or below a predefined threshold; see the sketch below.
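A minimal sketch of such a threshold check with the Coverage.py API, assuming the unit tests have already been run under "coverage run" (the threshold value is illustrative):

    import sys

    import coverage

    THRESHOLD = 80.0  # percent

    cov = coverage.Coverage()
    cov.load()            # read the .coverage data file written by "coverage run"
    total = cov.report()  # print the report and return the total coverage percentage

    if total < THRESHOLD:
        sys.exit(f"Test coverage {total:.1f}% is below the threshold of {THRESHOLD}%.")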

Invalid value in division warning in adaptivity functionality

For some cases, the following warning is observed from the file micro_manager/adaptivity/adaptivity.py:

RuntimeWarning: invalid value encountered in true_divide
  relative = np.nan_to_num((pointwise_diff / np.maximum(data[np.newaxis, :], data[:, np.newaxis])))

This needs to be investigated.
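The warning most likely comes from entries where both compared values are zero, so the division is 0/0. A minimal sketch of a guard (illustrative, not necessarily the fix to adopt):

    import numpy as np

    data = np.array([0.0, 0.0, 1.0, 2.0])
    pointwise_diff = data[np.newaxis, :] - data[:, np.newaxis]
    denominator = np.maximum(data[np.newaxis, :], data[:, np.newaxis])

    # Divide only where the denominator is non-zero and leave the other entries at 0,
    # so no invalid-value warning is raised in the first place.
    relative = np.divide(
        pointwise_diff,
        denominator,
        out=np.zeros_like(pointwise_diff),
        where=denominator != 0.0,
    )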
