
mui's Introduction

MUI - Multiscale Universal Interface

Concurrently coupled numerical simulations using heterogeneous solvers are powerful tools for modeling both multiscale and multiphysics phenomena. However, major modifications to existing codes are often required to enable such simulations, posing significant difficulties in practice. Here we present the Multiscale Universal Interface (MUI), which is capable of facilitating the coupling effort for a wide range of simulation types.

The library adopts a header-only form with minimal external dependency and hence can be easily dropped into existing codes. A data sampler concept is introduced, combined with a hybrid dynamic/static typing mechanism, to create an easily customizable framework for solver-independent data interpretation.

The library integrates MPI MPMD support and an asynchronous communication protocol to handle inter-solver information exchange irrespective of the solvers’ own MPI awareness. Template metaprogramming is heavily employed to simultaneously improve runtime performance and code flexibility.

In the publication referenced below, the library is validated by solving three different multiscale problems, which also serve to demonstrate the flexibility of the framework in handling heterogeneous models and solvers associated with multiphysics problems. In the first example, a Couette flow was simulated using two concurrently coupled Smoothed Particle Hydrodynamics (SPH) simulations of different spatial resolutions. In the second example, we coupled the deterministic SPH method with the stochastic Dissipative Particle Dynamics (DPD) method to study the effect of surface grafting on the hydrodynamic properties near the surface. In the third example, we considered conjugate heat transfer between a solid domain and a fluid domain by coupling the particle-based energy-conserving DPD (eDPD) method with the Finite Element Method (FEM).

Licensing

The source code is dual-licensed under either the GNU General Public License v3 or the Apache License v2.0; copies of both licenses should have been provided along with this source code.

Installation

MUI is a C++ header-only library with a single dependency: an MPI implementation that supports the MPMD paradigm.

Wrappers are provided for C, Fortran and Python. These require compilation, so when using MUI with any of these languages the library can no longer be considered header-only.

As a header-only library, using MUI in your own source code is straightforward. There are two ways to utilise the library in this scenario:

  1. Include "mui.h" in your code and add the appropriate paths to your compiler. If you wish to utilise a wrapper, go to the /wrappers folder and use the Makefile build system in each to generate compiled libraries to link against; any associated header files are also located there.
  2. (preferred) Utilise the provided CMake build files to create a local or system-wide installation of the library. In this case there are a number of CMake parameters you should consider:
    1. CMAKE_INSTALL_PREFIX=[path] - Set the path to install the library, otherwise the system default will be used
    2. CMAKE_BUILD_TYPE=Release/Debug/.. - Set the compilation type (only changes options for compiled wrappers)
    3. C_WRAPPER=ON/OFF - Specifies whether to compile the C wrapper during installation
    4. FORTRAN_WRAPPER=ON/OFF - Specifies whether to compile the Fortran wrapper during installation
    5. PYTHON_WRAPPER=ON/OFF - Specifies whether to compile and install the Python wrapper during installation, relies on a working Python3 toolchain and uses pip
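As an illustration of option 2, a typical out-of-source build and install might look like the following. This is only a sketch: the clone location, install prefix and wrapper switch values are examples, not prescriptions, so adjust them to your system.

```shell
# Illustrative CMake build of MUI; paths and ON/OFF choices are examples only.
git clone https://github.com/MxUI/MUI.git
cd MUI
mkdir build && cd build
cmake .. \
  -DCMAKE_INSTALL_PREFIX=$HOME/opt/mui \
  -DCMAKE_BUILD_TYPE=Release \
  -DC_WRAPPER=ON \
  -DFORTRAN_WRAPPER=OFF \
  -DPYTHON_WRAPPER=OFF
make install
```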

Publication

Tang, Y.-H., Kudo, S., Bian, X., Li, Z., & Karniadakis, G. E., Multiscale Universal Interface: A Concurrent Framework for Coupling Heterogeneous Solvers, Journal of Computational Physics, 2015, 297, 13-31.

Contact

Should you have any questions, please do not hesitate to contact the developers; a list can be found on the MxUI about page.

Examples

Computational Fluid Dynamics (CFD) - Finite Element (FEM) Fluid Structure Interaction
Finite Element (FEM) - Dissipative Particle Dynamics (DPD) Conjugate Heat Transfer
Dissipative Particle Dynamics (DPD) - Smoothed Particle Hydrodynamics (SPH) flow past a polymer-grafted surface


mui's Issues

bug in sampler_mov_avg.h

line 36
point_type dx = apply( data_points[i].first - focus, abs );

change to
point_type dx = apply( data_points[i].first - focus, fabs );

Connecting MUI applications with different starting commands

How can two MUI-enabled applications connect with each other if they are not started at the same time (with same mpirun command)?

Second question would be: if one has a MUI-enabled application, is it possible to connect with some other application that is extended by a dynamically loaded plug-in (containing MUI communication code)?

Possible bug with fetch_points/fetch_values

I have been trying to adapt example 8 (fetchall.cpp) to my purposes, but found the following strange behaviour. When running the code below with mpirun -np 1 ./fetchall mpi://domain1/ifs 0.618 : -np 1 ./fetchall mpi://domain2/ifs 1.414 it will crash with:

terminate called after throwing an instance of 'std::length_error'
  what():  basic_string::_M_create

However, it works OK for smaller amounts of data (e.g. n=1000).

Code (minimum example based on fetchall.cpp) compiled with g++10, mpich and using same Makefile from fetchall example

#include "../mui/mui.h"

int main( int argc, char ** argv ) {
    if ( argc < 3 ) {
        printf( "USAGE: mpirun -np n1 %s URI1 value1 : -np n2 %s URI2 value2\n\n"
                "n1, n2     : number of ranks for each 'subdomain'\n"
                "URI format : mpi://domain-identifier/interface-identifier\n"
                "value      : an arbitrary number\n\n"
                "EXAMPLE: mpirun -np 1 %s mpi://domain1/ifs 0.618 : -np 1 %s "
                "mpi://domain2/ifs 1.414\n\n",
                argv[0], argv[0], argv[0], argv[0] );
        exit( 0 );
    }

    mui::uniface3d interface( argv[1] );

    if (std::string(argv[1]) == "mpi://domain1/ifs")
    {
      int n = 3000;
      std::vector<mui::point3d> push_locs( n );

      printf( "domain %s pushed %d values %s\n", argv[1], n, argv[2] );

      // Push value stored in "argv[2]" to the MUI interface                                                                                                                                             
      for ( size_t i = 0; i < (size_t)n; i++ ) { // Define push locations and push the value
        push_locs[i] = { i*2.1, i/10.0, i/20.0};
        interface.push( "data", push_locs[i], atof( argv[2] ) );
      }

      // Commit (transmit by MPI) the values at time=0                                                                                                                                                   
      interface.commit( 0 );
    }
    else
    {
      // Fetch the values from the interface using the fetch_points and fetch_values methods                                                                                                             
      // (blocking until data at "t=0" exists according to chrono_sampler)                                                                                                                               
      int time = 0;

      mui::chrono_sampler_exact1d chrono_sampler;
      std::vector<mui::point3d> fetch_locs = interface.fetch_points<double>("data", time, chrono_sampler ); // Extract the locations stored in the interface at time=0                                   
      std::vector<double> fetch_vals = interface.fetch_values<double>("data", time, chrono_sampler ); // Extract the values stored in the interface at time=0                                            

      // Print returned values                                                                                                                                                                           
      for ( size_t i = 0; i < fetch_locs.size(); i++ ) 
        printf( "domain %s fetched value %lf at location %lf\n", argv[1], fetch_vals[i], fetch_locs[i][0] );
      
    }
    
    return 0;
}

Transferring an array of strings from Fortran into C++

interface 
    !Set of 1D interfaces with float=double and int=int32
    !Recommend to use the create_and_get_uniface_multi_1d_f(*) subroutine
    ! instead of using this subroutine directly
    subroutine mui_create_uniface_multi_1d_f(domain, interfaces, &
        interface_count) bind(C)
      import :: c_char,c_int
      character(kind=c_char), intent(in) :: domain(*)
      character(kind=c_char,len=*), intent(in) :: interfaces(*)
      integer(kind=c_int), value :: interface_count
    end subroutine mui_create_uniface_multi_1d_f
.....
end interface


    subroutine create_and_get_uniface_multi_1d_f(uniface_pointers_1d, domain, interfaces, &
      interface_count)
      use, intrinsic :: iso_c_binding
      implicit none

      type(ptr_typ_1d), target :: uniface_pointers_1d(:)
      character(kind=c_char), intent(in) :: domain(*)
      character(kind=c_char,len=*), intent(in) :: interfaces(*)
      integer(kind=c_int), VALUE :: interface_count
      integer :: i

      call mui_create_uniface_multi_1d_f(domain, interfaces, &
        interface_count)

      do i = 1, interface_count
        uniface_pointers_1d(i)%ptr = get_mui_uniface_multi_1d_f(i)
      end do
    end subroutine create_and_get_uniface_multi_1d_f

The compiler reported an error for character(kind=c_char,len=*), intent(in) :: interfaces(*).

I made some modifications:

For the interface

subroutine mui_create_uniface_multi_1d_f(domain, interfaces, &
    interface_count) bind(C)
  import :: c_char,c_int,c_ptr
  character(kind=c_char), intent(in) :: domain(*)
  type(c_ptr),DIMENSION(*) :: interfaces
  integer(kind=c_int), value :: interface_count
end subroutine mui_create_uniface_multi_1d_f

In create_and_get_uniface_multi_1d_f, modify

character(kind=c_char, len=*),  target, intent(in) :: interfaces(*)
type(c_ptr) :: c_interfaces(interface_count)

do i=1, interface_count
  c_interfaces(i) = c_loc(  interfaces(i) )
enddo

call  mui_create_uniface_multi_1d_f(domain, c_interfaces, interface_count)

Fortran wrapper type error?

void mui_forget_upper_3d_f(mui_uniface_3d *uniface, double *upper, int *reset_log) {
    uniface->forget(*upper, static_cast<bool>(*reset_log));
}

In its Fortran interface:

    subroutine mui_forget_upper_3d_f(uniface,upper,reset_log) bind(C)
      import :: c_ptr,c_int,c_double
      type(c_ptr), intent(in), value :: uniface
      real(kind=c_double), intent(in), value :: upper
      integer(kind=c_int), intent(in), value :: reset_log
    end subroutine mui_forget_upper_3d_f

For the arguments upper and reset_log, is the value attribute correct, given that they are pointers in the C++ code?

Issue with Nearest Neighbour spatial filter

I have an issue with the nearest neighbour spatial filter. The filter does not properly pick up points on a boundary, even when the two meshes are conformal.
In the following example I have a square with a 9x9 mesh. On the left and bottom boundaries, points are not selected correctly.
[image: ResuPoint_SameMesh]
A simple way to resolve the problem was to change the support geometry from a point:

inline geometry::any_shape<CONFIG> support( point_type focus ) const {
	return geometry::point<CONFIG>( focus );
}

to a sphere:

inline geometry::any_shape<CONFIG> support( point_type focus ) const {
	return geometry::sphere<CONFIG>( focus, 1.e-2 );
}

This is now the result:
[image: ResuSphere_SameMesh]

Note that the tolerance I used for the sphere is the same that I passed to the original NN filter based on a point.
I have a code to test the problem where a scalar field is passed between two solvers. I developed it as an extra example for the MUI-demo.

Add unit tests for Python wrapper.

There is currently no testing for the Python wrapper. Once issue #75 is dealt with and we have a Python package, I propose using pytest as the framework to write the tests, which will go under a package_name/tests folder.

Please provide guidance on how to install MUI in Lammps

Dear Mr. SLongshaw,

I am a complete beginner with simulation. I am currently studying multiscale simulation, especially using Lammps. I hope you have time to write an instruction guide. Thank you very much for your help.

Better packaging for Python wrapper.

Currently the way of building the bindings for Python is quite rudimentary, through a Makefile. It would be better to create a Python package the user can build and install, or even upload it to the Python Package Index so the user does not need to build it at all.

accelerating spatial samplers

The spatial samplers are all currently implemented as single-threaded function calls on the CPU; there may be scope for accelerating the majority of them on many-core architectures, given that they basically all loop over a set of points.

Immediate issues:

  1. Memory transfer - the cell list is built for each data frame, so either that or the individual "data_points" subset found from the cell list would need to be transferred. This will be costly and hard to hide; there might be scope for using a CUDA-aware MPI type approach.
  2. The calls to filter() usually work on small subsets (< 50 points) - these would need to be bundled up and run in parallel to make the most of a GPU.
  3. Ideally, a solution should be hardware agnostic, so it should focus either on something low-level like OpenCL or a higher-level SYCL type approach.

Calling mpi_split_by_app with a single app

In lib_mpi_split.h, the function mpi_split_by_app will crash with an unhelpful error (SIGSEGV) if not run in MPMD mode. I implemented a version (see my branch https://github.com/rupertnash/MUI/tree/fix-MPI_APPNUM-missing) that will return MPI_COMM_WORLD in the case of being run as a SPMD program.

I'm not sure if this is in fact the correct approach. It may be better to abort with a sensible message to the user. E.g.:

if (flag) {
  // MPI_Comm_split etc...
  return domain;
} else {
  std::cerr << "Calling mui::mpi_split_by_app with only a single app is erroneous" << std::endl;
  MPI_Abort(MPI_COMM_WORLD, 1);
}

Can you please let me know what fits better with the "MUI philosophy"!
Cheers!

MUI Coupling with the Lattice Boltzmann solver Palabos freezes

I am trying to use MUI with Palabos as one of the solver applications. Palabos uses MPI to parallelize computations and the application is written completely in C++. I find that whenever a Palabos parallel functional is being executed, the program freezes.

Fortran wrapper test mixes comm arguments

In the fortran wrapper unit test the argument taken from the command line seems to be confused. That is

call getarg(2, arg)

should probably be

call getarg(1, arg)

Also, trimming of the argument is subsequently necessary, as otherwise trailing spaces confuse the MPI name. My suggestion would therefore be to add trim:

call mui_create_uniface3d_f(uniface, trim(arg))

This still does not fix the issue, as mpirun then ends with an invalid reference, which may be due to the difficult C/Fortran relationship rather than MUI itself.

Store the mesh connectivity information for mesh based solvers

MUI (up to V1.1.3) uses point clouds for both mesh-based solvers and particle-based solvers. That may introduce overhead for mesh-based solvers, as the mesh connectivity information isn't stored and we have to loop over the mesh at each push/fetch.
It might be useful to minimise the overhead of using MUI by storing the mesh connectivity information for mesh-based solvers.

improve generality of RBF spatial filter

The RBF spatial filter currently assumes the use of a fixed point cloud based on the initial locations used to build the H matrix, it is possible to update this as you go but not rebuild it directly.

To make the filter more generally applicable we will need to:

  1. Make rebuilding the matrix faster.
  2. Explore how to change the matrix point location lookup from location based to index based.
  3. Create a trigger function for when to rebuild the matrix.

Add interpolation filter at the sending side

Currently (up to MUI V1.1.3), interpolation is done on the receiving side through samplers. It might be useful to add interpolation on the sending side, to avoid the overhead of sending over all of the points when the receiving side only needs a part of them.

Setting up continuous integration server

Following our conversation I would like to set up a minimal continuous integration server for us. I will do a bit of research in the coming days on the best strategy to do this.

The problem I see straight away is that I don't want to break the minimality of MUI, but any kind of testing will require creating executables. I will now study how other header-only libraries do this. Perhaps they have a minimal Make/CMake/autoconf setup to create tests separately.

In one of my older projects I had tests compile as an option: if the user wanted tests to compile, gtest and benchmark were downloaded. This is also something to consider. @yhtang @SLongshaw please let me know if you have any thoughts on it.

Expect updates on this and a pull request from my fork.

CMake shouldn't specify Fortran unless needed

At the moment, the first line of CMakeLists.txt specifies Fortran (and C) as required languages.

These should instead be added later, inside if(FORTRAN_WRAPPER), e.g. using enable_language(Fortran).
Otherwise, the install will fail on systems with no Fortran compiler available.
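A sketch of the suggested change (the exact project() call depends on the real CMakeLists.txt):

```cmake
# Declare only C++ up front...
project(MUI CXX)

# ...and enable wrapper languages only when they are actually requested.
if(C_WRAPPER)
  enable_language(C)
endif()
if(FORTRAN_WRAPPER)
  enable_language(Fortran)  # CMake's language name is "Fortran", not "FORTRAN"
endif()
```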

Fix endianness traits

Hi @SLongshaw

I saw that you reverted the endian traits patch in #14. I'd like to get this accepted because without it I can't compile MUI on my laptop.

Can you please give me a bug report? Then I have a chance to fix it!

I tested initially with clang (Apple 10.0 / upstream 6) and gcc/6.3.0, but have just tried on Cirrus (http://www.cirrus.ac.uk) with all the GCC versions available and not had problems. (gcc/6.2.0 gcc/6.3.0(default) gcc/7.2.0 gcc/8.2.0)

Cheers

polymorphic uniface -- ideas / opinions wanted

Would it be useful to define a uniface base class that is non-templated, and contains virtual member functions? Then the templated uniface inherits from the base class. This would allow polymorphism, like so:

Interface* inf = nullptr;

if (dim == 1)
    inf = new uniface<config1d>();
else if (dim == 2)
    inf = new uniface<config2d>();
else if (dim == 3)
    inf = new uniface<config3d>();

inf->push(...);
inf->commit(...);    // works independent of interface dimension

We would perhaps need a constructor on point that takes three arguments, so that a 1D interface can always accept a point at (x, 0, 0), for instance. The samplers would similarly need an abstract base class.

Not sure if this would be useful, or not.

The alternative for generic dimensional code seems to be to have inf1d, inf2d and inf3d pointers and to wrap everything MUI-related in if(dim==1), etc., which seems messy.

configure/install as header-only

General wishlist and/or misc notes. No immediate action (or any action) required.

I looked a bit at the interface while trying to understand some integration code. From what I can see, in the non-Fortran case and others the MUI code will be installed as a header-only configuration.
In this case, the header is in fact independent of the MPI vendor and version. So the following snippet is actually irrelevant:

find_package(MPI REQUIRED)
if(MPI_FOUND)
        include_directories(SYSTEM ${MPI_INCLUDE_PATH})
elseif(NOT MPI_FOUND)
        message(SEND_ERROR "MPI not found")
endif(MPI_FOUND)

Not sure what exactly the conditions would be, but it seems that having

install(TARGETS MUI EXPORT muiTargets INCLUDES DESTINATION include LIBRARY DESTINATION lib)

might be able to change to this in that case:

install(TARGETS MUI EXPORT muiTargets INCLUDES DESTINATION include)

Not really sure if the "*.f90" install pattern is relevant for the non-Fortran case.

  • some of the includes (eg, "config.h") have a very generic naming and may result in include conflicts with other packages
  • could be useful to have a MUI_VERSION define in the mui header. Would allow this type of code:
    #ifdef HAVE_MUI_DETECTED   // eg, autoconfig, cmake, some manual means
    #include "mui.h"
    #endif
    
    ... later in the code
    #if (MUI_VERSION > xxx)
       some mui stuff
    #endif 
    
    This would allow a nice separation of the regular detection defines and ones related to MUI itself (ie, is known and has a particular min API level). I would personally opt for a linear API number like boost (or OpenFOAM) since these are really easy to test for without needing extra macros. Eg
    //  BOOST_VERSION % 100 is the patch level
    //  BOOST_VERSION / 100 % 1000 is the minor version
    //  BOOST_VERSION / 100000 is the major version
    #define BOOST_VERSION 106600
    
