opm / ifem
IFEM - Isogeometric Toolbox for the solution of PDEs
License: Other
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<simulation>
<!-- General - geometry definitions !-->
<geometry>
<raiseorder patch="1" u="1" v="1" w="1"/>
<refine type="uniform" patch="1" u="7" v="7" w="7"/>
<topologysets>
<set name="Boundary" type="face">
<item patch="1">1 2 3 4 5 6</item>
</set>
</topologysets>
</geometry>
<!-- General - boundary conditions !-->
<boundaryconditions>
<dirichlet set="Boundary" comp="1"/>
</boundaryconditions>
<!-- Problem-specific block !-->
<poisson>
<source type="expression">1</source>
</poisson>
</simulation>
Running this with and without the -LR flag gives different norm evaluations.
>>> Solution summary <<<
L2-norm : 0.0190837
Max displacement : 0.0562307 node 545
Energy norm a(u^h,u^h)^0.5 : 0.141993
External energy (h,u^h)^0.5 : 0.141993
>>> Solution summary <<<
L2-norm : 0.0190837
Max displacement : 0.0562307 node 557
Energy norm a(u^h,u^h)^0.5 : 0.00627525
External energy (h,u^h)^0.5 : 0.00627525
The logic is flawed: the adaptivity requires the existence of a projection, when what it should require is the existence of some norm evaluation. AdaptiveSIM::initAdaptor() is at fault.
Reproducing files here: https://gist.github.com/TheBB/33cc23e2d6127bcb15fae21c5a2f5daf. There are four: 02.lr, 03.lr, snapshot.xinp and test.py.
It's a simple beam elasticity case that I run with:
LinEl snapshot.xinp -2D -LR
Then the dumped matrices are checked with the test.py script. Run it with both 02.lr and 03.lr as mesh. The residual for the finer mesh will be much higher, even though the solutions themselves look identical. I'm wondering why that is. These are two meshes out of a chain of increasing resolution, and it's definitely not a gradual change; there's something that 03.lr and above have that 02.lr and below don't.
My output:
02.lr: Residual norm: 1.1185073313739549e-11
03.lr: Residual norm: 0.2923536476758355
cc @VikingScientist (could be something wrong with the lr-file, perhaps)
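For reference, a residual check along these lines can be sketched with NumPy. This is an illustrative sketch only, not the actual test.py from the gist; relative_residual is a made-up helper:

```python
# Illustrative sketch (not the actual test.py): check the relative
# residual of a dumped linear system A x = b.
import numpy as np

def relative_residual(A, b, x):
    """Return ||A x - b|| / ||b||; near machine precision for a good solve."""
    return np.linalg.norm(A @ x - b) / np.linalg.norm(b)

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.linalg.solve(A, b)

# A healthy solve gives a residual near machine precision; a residual
# like 0.29 on the finer mesh indicates something is off.
print(relative_residual(A, b, x))
```

A residual that jumps from ~1e-11 to ~0.29 between two meshes in the same refinement chain is the kind of discontinuity this check exposes.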
The doc folder contains two sub-folders with some documentation of the Wall-distance and Spalart-Allmaras integrands. As the implementations of these now live in the IFEM-NavierStokes repository, this documentation should be moved there (and perhaps updated).
I am doing uniform refinement with LR B-splines. The polynomial degrees are 2 and 3 for the pressure and velocity, respectively, and their continuity is 1 everywhere except on the line from the origin to the leftmost corner, where it is 0. In the refinement we do just 3 steps with β = 100% (uniform).
The initial mesh, which is correct.
The next mesh, where the C0-line is wrong. IFEM decreases the continuity although it should not.
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<simulation>
<geometry dim="2">
<patchfile>Rec_Lshape.g2</patchfile>
<!-- <raiseorder patch="1" u="1" v="1"/> -->
<refine patch="1" type="uniform" u="7" v="7"/>
<topologysets>
<set name="sides" type="edge">
<item patch="1">1 2 3 4</item>
</set>
<set name="horizontal" type="edge">
<item patch="1">3 4</item>
</set>
</topologysets>
</geometry>
<stokes formulation="mixed">
<boundaryconditions>
<dirichlet set="sides" basis="1" comp="12" type="anasol"/>
</boundaryconditions>
<constrain_integrated_pressure>-0.0000052473917314548</constrain_integrated_pressure>
<no_div_in_recovery/>
<fluidproperties mu="1.0" rho="1.0"/>
<anasol type="expression">
<variables>
rect2pol(x,y,&r,&t);
r = if(below(r,1e-12),1e-12,r);
o = 1.5*PI;
a = 856399/1572864;
a1 = a+1;
a2 = a-1;
CA = cos(a*o);
P0 = sin(a1*t)*CA/a1-cos(a1*t)-sin(a2*t)*CA/a2+cos(a2*t);
P1 = cos(a1*t)*CA+a1*sin(a1*t)-cos(a2*t)*CA-a2*sin(a2*t);
P2 = -a1*sin(a1*t)*CA+a1*a1*cos(a1*t)+a2*sin(a2*t)*CA-a2*a2*cos(a2*t);
P3 = -a1*a1*(cos(a1*t)*CA+a1*sin(a1*t))+a2*a2*(cos(a2*t)*CA+a2*sin(a2*t));
u = (a1*sin(t)*P0+cos(t)*P1)*pow(r,a);
ux = (a*cos(2*t)*P1+(a1*a2*P0-P2)*0.5*sin(2*t))*pow(r,a2);
uy = (pow(cos(t),2)*P2+a*sin(2*t)*P1-a1*P0*(a2*pow(cos(t),2)-a))*pow(r,a2);
v = (sin(t)*P1-a1*cos(t)*P0)*pow(r,a);
vx = (-pow(sin(t),2)*P2+a*sin(2*t)*P1-a1*P0*(a2*pow(cos(t),2)+1))*pow(r,a2);
vy = -ux;
p = pow(r,a-1)*(a1*a1*P1+P3)/a2;
</variables>
<primary>u|v</primary>
<scalarprimary>p</scalarprimary>
<secondary>ux|uy|vx|vy</secondary>
</anasol>
<source type="expression">
rect2pol(x,y,&r,&t);
r = if(below(r,1e-16),1e-16,r);
o = 1.5*PI;
a = 856399/1572864;
a1 = a+1;
a2 = a-1;
CA = cos(a*o);
P0 = sin(a1*t)*CA/a1-cos(a1*t)-sin(a2*t)*CA/a2+cos(a2*t);
P1 = cos(a1*t)*CA+a1*sin(a1*t)-cos(a2*t)*CA-a2*sin(a2*t);
P2 = -a1*sin(a1*t)*CA+a1*a1*cos(a1*t)+a2*sin(a2*t)*CA-a2*a2*cos(a2*t);
P3 = -a1*a1*(cos(a1*t)*CA+a1*sin(a1*t))+a2*a2*(cos(a2*t)*CA+a2*sin(a2*t));
P4 = pow(a1,3)*(sin(a1*t)*CA-a1*cos(a1*t))+pow(a2,3)*(a2*cos(a2*t)-sin(a2*t)*CA);
S1 = a1*a1*P1+P3;
S2 = a1*a1*P2+P4;
D2u = (cos(t)*P3+a2*sin(t)*P2+a1*a1*(cos(t)*P1+a2*sin(t)*P0))*pow(r,a-2);
D2v = (sin(t)*P3-a2*cos(t)*P2+a1*a1*(sin(t)*P1-a2*cos(t)*P0))*pow(r,a-2);
px = (S1*cos(t)-S2*sin(t)/a2)*pow(r,a-2);
py = (S1*sin(t)+S2*cos(t)/a2)*pow(r,a-2);
-D2u+px | -D2v+py
</source>
</stokes>
<discretization>
<nGauss>5</nGauss>
</discretization>
<postprocessing>
<projection>
<CGL2/>
</projection>
</postprocessing>
<linearsolver>
<rtol>1e-10</rtol>
<ilu_fill_level>1</ilu_fill_level>
</linearsolver>
<adaptive>
<maxstep value="3"/>
<maxdof value="40000"/>
<beta type="symmetrized" value="100"/>
<errtol value="0.00000001"/>
<use_norm value="1"/>
<scheme value="isotropic_function"/>
<scheme>isotropic_function</scheme>
<store_eps_mesh/>
<store_errors/>
</adaptive>
</simulation>
200 1 0 0
2 0
5 3
0 0 0 1 1 2 2 2
3 3
0 0 0 1 1 1
0 -1
0 -0.5
0 0
0.5 0
1 0
-0.5 -1
-0.5 -0.25
-0.5 0.5
0.25 0.5
1 0.5
-1 -1
-1 0
-1 1
0 1
1 1
A very frequent error is wrong syntax in the xinp files. Tags which are not registered are simply ignored, which means that you never get notified; your solution just misbehaves because you wrote <stabilisation> instead of <stabilization>.
The user needs to be sufficiently warned when there is something wrong with the input file by either stopping program execution or issuing a clear warning message; preferably at the end of the output for increased visibility.
The Annulus2D and Annulus3D test models have dirichlet boundary conditions defined in local coordinates. This causes the VTF output of projected secondary solutions to produce incorrect results. This is because the extraction of patch-wise projected solution from the global solution vector does not account for the extra (non-spline) nodes.
The MidShipFrame model also produces invalid projected fields on the VTF, although this model does not have local coordinate systems but uses immersed boundaries.
Therefore, the vtf-regression tests of these models (added in OPM/IFEM-Elasticity#49) currently have the projection switched off. Reactivate, and update those tests when this is fixed.
With #385 I also include the option to use UMFPACK in place of SuperLU in the L2-projection, motivated by the fact that SuperLU produces NaN results (or core dumps) instead of returning an error state when something is wrong with the coefficient matrix. However, with UMFPACK I only got the first stress component out when projecting the stress tensor; all the others were zero! Probably an error in our UMFPACK wrapping; I guess the package itself is supposed to manage multiple right-hand sides.
To be able to run adaptive C1-continuous plate problems (KL) with symmetry boundary conditions (the problem I showed on the whiteboard today), I need to constrain the node that is adjacent to a boundary node, but not on the boundary itself (i.e., the node just inside). That is, we need something like ASMs2DC1::constrainEdge but for ASMu2D. Maybe we need an ASMu2DC1 class for that.
Hello
Is there a way to build IFEM in a Windows environment with MSVC and Intel Fortran?
I am trying, but CMake fails when trying to find a suitable LAPACK library, even though I can provide LAPACK and OpenBLAS.
best
jac
There are essentially 4 ways of specifying parameters, here illustrated with the "mode" parameter to the "eigensolver" tag.
<simulation>
<eigensolver mode="1">
<mode value="2"> 3 </mode>
</eigensolver>
</simulation>
and run this with the command-line parameter <myProgram> <myInputFile> -eig 4. The current ordering is that the command line takes precedence over the .xinp-file parameters, and then it goes in decreasing order 3, 2 and finally mode="1", which has the lowest priority. There is, however, varied support for this structure. For instance
<simulation>
<discretization nGauss="1">
<nGauss value="2"> 3 </nGauss>
</discretization>
</simulation>
only supports version 3 while
<simulation>
<adaptive maxstep="1">
<maxstep value="2"> 3 </maxstep>
</adaptive>
</simulation>
only supports versions 2 and 3.
I'm trying to generate some .xinp files (a file generator) and was wondering what "best practice" is, and which variants we officially want to support. At the very least, I think the <nGauss> case above qualifies as a bug, since I think version 2 should be supported.
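The four variants and their stated precedence can be sketched as follows. This is illustrative Python, not IFEM's actual parser; resolve_mode is a made-up name:

```python
# Illustrative sketch of the precedence described above (not IFEM code):
# command line > tag content > value attribute > attribute on parent tag.
import xml.etree.ElementTree as ET

def resolve_mode(xml_text, cli_value=None):
    root = ET.fromstring(xml_text)
    solver = root.find('eigensolver')
    tag = None if solver is None else solver.find('mode')

    if cli_value is not None:                          # 4: -eig on the command line
        return int(cli_value)
    if tag is not None and tag.text and tag.text.strip():
        return int(tag.text)                           # 3: tag content
    if tag is not None and 'value' in tag.attrib:
        return int(tag.attrib['value'])                # 2: value attribute
    if solver is not None and 'mode' in solver.attrib:
        return int(solver.attrib['mode'])              # 1: parent attribute
    return None

xml = """<simulation>
  <eigensolver mode="1">
    <mode value="2"> 3 </mode>
  </eigensolver>
</simulation>"""

print(resolve_mode(xml))               # 3: content beats both attributes
print(resolve_mode(xml, cli_value=4))  # 4: command line beats everything
```

A file generator that always emits variant 3 (tag content) would at least match the one form every parser in the examples above supports.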
After the introduction of the basis function cache feature in #571, it is no longer possible to run models with Lagrange meshes where the mesh is read from a file instead of being derived from a spline object. We had such a feature for reading Matlab files, but without a full unit test (we test reading the file itself and setting up the FE data structure, but not integrating the FE matrices), so this probably slipped through.
Try the input file in L1-50x50.zip with LinEl: it crashes due to dereferencing a null spline object pointer when setting up the cache in ASMs2DLag::integrate(). Not sure how to fix this properly, hence this issue.
For linear elasticity when parsing the stress tag, the following ordering is applied
<elasticity>
<anasol>
<stress>
sigma_xx |
sigma_xy |
sigma_yy
</stress>
</anasol>
</elasticity>
This is not consistent with the Voigt notation σxx, σyy, σxy, which is assumed in all other parts of the source code.
One way is to fix SymmTensor STensorFuncExpr::evaluate(const Vec3& X) const in src/Utility/ExprFunctions.C.
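As a sketch of the required reorder (illustrative only; to_voigt_2d is a made-up helper, not IFEM code), the parsed 2D components just need their last two entries swapped before being stored in Voigt order:

```python
# Hypothetical sketch: reorder a parsed 2D stress triple from the input
# order (sigma_xx, sigma_xy, sigma_yy) to the Voigt order assumed by the
# rest of the code (sigma_xx, sigma_yy, sigma_xy).
def to_voigt_2d(parsed):
    sxx, sxy, syy = parsed
    return (sxx, syy, sxy)

print(to_voigt_2d((1.0, 2.0, 3.0)))  # (1.0, 3.0, 2.0)
```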
Running the 3D multipatch adaptive elasticity test "disc-stair-adap" and changing <maxstep> in <adaptive> to 6 makes the test crash in the final adaptive iteration. This segmentation fault happens during continuous global L2-projection of secondary solutions.
Neumann boundary conditions come in two types: either you have the exact solution, or you specify some fluxes/tractions.
Poisson
Exact solution available: you want to specify [du/dx, du/dy, du/dz] and have IFEM treat this appropriately. Input: vector functions.
Heat flux: you want to specify du/dn as some flux into/out of your domain. Input: scalar functions.
Elasticity
Exact solution available: you want to specify the stresses sigma and have IFEM dot this with the normal vector. Input: tensor functions.
Tractions: you want to specify sigma*n = [fx,fy,fz] as some directional forces on your boundary. Input: vector functions.
Proposed solution:
Count the number of components (separated by pipe | symbols) specified by the user in the neumann tag of the input file, and treat the function as either pre-multiplied with the normal vector, or post-multiplied with it. For scalar equations one can have vector or scalar neumann conditions; for vector equations one can have tensor or vector neumann conditions.
Remove the 'direction' keyword altogether.
See last comment in PR #120
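The proposed component counting could look something like this. This is an illustrative sketch; classify_neumann and its return labels are assumptions, not IFEM's actual parser:

```python
# Illustrative sketch of the proposed dispatch (not IFEM code): classify
# a neumann expression by its number of '|'-separated components.
def classify_neumann(expr: str, nsd: int, vector_equation: bool) -> str:
    ncomp = expr.count('|') + 1
    if not vector_equation:                      # scalar PDE, e.g. Poisson
        if ncomp == 1:
            return 'scalar flux du/dn'           # already a normal flux
        if ncomp == nsd:
            return 'gradient vector'             # IFEM dots it with n
    else:                                        # vector PDE, e.g. elasticity
        if ncomp == nsd:
            return 'traction vector'             # sigma*n given directly
        if ncomp == nsd * (nsd + 1) // 2:
            return 'stress tensor'               # IFEM dots it with n
    raise ValueError(f'cannot interpret {ncomp} components')

print(classify_neumann('x | y | z', nsd=3, vector_equation=False))
print(classify_neumann('sxx|syy|szz|sxy|syz|szx', nsd=3, vector_equation=True))
```

Note that the component counts are unambiguous in both cases (1 vs. nsd for scalar equations, nsd vs. nsd(nsd+1)/2 for vector equations), which is what would let the 'direction' keyword go away.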
Could we add a filename attribute on the <vtfformat> tag, so that this is not tied to the filename of the xinp file running the simulation?
Maybe as an optional package, since the core works without it?
Now I (and other amateurs) have to guess that libhdf5-dev is the one you need. But when used with PETSc, maybe another package is required? And wasn't there also an issue with HDF5 and PETSc together that made things difficult before? Or maybe that was only with the older Linux.
Anyway, it would be nice if the README said what to do in both cases (with PETSc and without). I have not installed PETSc yet, so it looks like libhdf5-dev does it.
Could we make it the default to dump discretization information at the top of every log file, just to make it explicitly clear which ASM is used for the simulation? It's a jungle of different discretizations available, with mixed and LR, structured and Lagrange and spectral.
Just dump something like this:
Model size: 6 patches
LR-splines used
Mixed methods: subgrid
or something of the sort. Ideally I would like to have it at the very top, right after "Equation solver" and "Integrand type", but seeing as one needs to parse the input file before finding this information, I guess I can accept having it just before the boundary conditions are applied.
I only trigger this in Release mode on my OS X machine (i.e. with clang?). The problem was traced back to 85a7024.
Here is my scenario:
I'm trying to use a VirtualBox with Linux on my (more portable) Windows machine, and have cloned the sources into a shared folder on the Windows side, say C:\KMO\VBox-share\IFEM... Inside the virtual Linux this is mounted as, say, /mnt/kmo/IFEM. This way I can look at and edit the files using the Windows editors, etc. However, I would like the build folder to reside inside the vbox only, so I created something like ~/IFEM-Release for that. This works well for building the IFEM core, doing cmake /mnt/kmo/IFEM ... in there. But when I want to build the apps, there are too many assumptions that your build folder is a sub-folder of the source directory. Is it possible to fix this without destroying all the other cmake logic?
Yes, one solution is to always use IFEM_AS_SUBMODULE, but it would be nice if the other option worked also.
Another related issue: with the sources like this, it appears that the file names become case-insensitive. Then the file Math.h in src/Utility is confused with the system math.h file, and you get the errors I showed you before I left. Is there a workaround for this, other than renaming Math.h (and Math.C) to something like MathUtl.[Ch]?
The <collapse> keyword is used to collapse an edge into a point, or a face into a line, when you have a polar parametrization (quarter hemisphere, for example). All control points along that edge/face are then assigned the same node number, such that they share the same nodal DOFs at that location. But, depending on the orientation of your parametrization, this may ruin the element blocks for multi-threading, such that blocks that are assembled in parallel may try to update the same entry in the global matrix. This will then yield random and locally incorrect results.
The solution would be to check whether <collapse> has been used, and then orient the blocks accordingly, such that the collapsed edge/face is not split into several blocks.
In the meantime, the workaround is to run without multi-threading, using OMP_NUM_THREADS=1.
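The hazard can be illustrated with a small sketch (a simplified, assumed model of the element blocks; blocks_conflict is a made-up helper): two blocks are safe to assemble in parallel only if their elements share no nodes, and a collapsed edge silently makes previously disjoint blocks share one.

```python
# Simplified sketch (not IFEM code): two element blocks may be assembled
# in parallel only if they share no nodes.
def blocks_conflict(block_a, block_b, element_nodes):
    nodes_a = set().union(*(element_nodes[e] for e in block_a))
    nodes_b = set().union(*(element_nodes[e] for e in block_b))
    return not nodes_a.isdisjoint(nodes_b)

# Two elements with disjoint nodes: safe to assemble in parallel.
print(blocks_conflict([0], [1], {0: {0, 1}, 1: {2, 3}}))  # False

# After <collapse> assigns the same node number (1) to both elements,
# the blocks conflict and must be serialized (or re-oriented).
print(blocks_conflict([0], [1], {0: {0, 1}, 1: {2, 1}}))  # True
```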
The attached model Frame.zip fails in the 6th adaptive step with the error message:
Connecting P11 E3 to P8 E4 reversed? 0
*** ASMu2D::connectBasis: Non-matching edges, sizes 9 and 6
*** SIM2D::parse: Error establishing connection.
Parsing <topologysets>
*** SIMadmin::read: Failure occured while parsing "geometry".
Run with LinEl -adap Frame-AMR-b02-p2.xinp to trigger, with #346 and OPM/IFEM-Elasticity#85. Apparently there are some control point mismatches over the patch interfaces, although I thought that was supposed to be handled automatically by now?
The wiki is already publicly available at https://github.com/akva2/IFEM/wiki, and it is more than good enough to put up here.
The current implementation is no good. It gives the correct Cartesian coordinates, yes, but it also needs to return the index of the matching nodal (control) point, if any. This is needed, e.g., for running adaptive simulations of plate problems with point loads.
At first glance I thought it would be as easy as checking which of the fe.N values is equal to 1, but it seems the LRSplineSurface::getElementContaining method works differently: it does not necessarily return an element where one of the nodes corresponds to the given point. Is this a bug or by intention? If this could be fixed first, the evalPoint fix will be straightforward.
The FindPetsc.cmake module currently depends on pkg_check_modules returning PETSC_INCLUDE_DIRS. However, pkg-config deduplicates paths, and strips an include path if it is a default include path or a path already contained in CFLAGS and similar environment variables when pkg-config is called. In HPC centers using environment modules with Lmod or similar, this is very often the case. That leads to PETSC_INCLUDE_DIRS being empty and PETSc thus being disabled, although it is found. The behaviour can be circumvented by defining
set(ENV{PKG_CONFIG_ALLOW_SYSTEM_CFLAGS} 1)
It could be set and unset similarly to how PKG_CONFIG_PATH is handled before the call.
Greetings
André
Useful things missing from the wiki documentation:
<partitioning procs="4" nperproc="1"/>
<include>some-shared-xinp-file.xinp</include>
probably more, but these are the ones I learned about in the last few days.
The norm evaluations use the same element groups for multi-threaded assembly as the matrix assembly, where effort is made to establish groups with no overlapping nodes that can be assembled in parallel. However, especially for unstructured LR-spline grids, the element groups tend to be highly unbalanced, thereby often causing more overhead than we gain from the multiple threads.
The norm evaluation, however, only assembles into non-overlapping element arrays, and can therefore be done in parallel and in random order with no danger of conflicting updates. It would therefore be preferable to avoid the groups made for matrix assembly here, and instead distribute the elements evenly over the threads. This is especially important when the norm integration involves analytical solutions with expensive evaluations, like series summation; the norm integration is then the main bottleneck of the computation as of now.
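The even distribution of elements over threads can be sketched as follows (illustrative, not IFEM code; integrate_norms is a made-up helper): since each element writes only its own slot of the norm array, a plain round-robin split is conflict-free and needs no grouping.

```python
# Illustrative sketch (not IFEM code): per-element norm contributions are
# written to disjoint slots, so elements can be split evenly over threads
# without the nodal-conflict-free grouping needed for matrix assembly.
from concurrent.futures import ThreadPoolExecutor

def integrate_norms(elements, element_norm, nthreads=4):
    norms = [0.0] * len(elements)

    def work(chunk):
        for i in chunk:
            norms[i] = element_norm(elements[i])  # disjoint writes: safe

    # Round-robin split: roughly equal load per thread regardless of mesh.
    chunks = [range(t, len(elements), nthreads) for t in range(nthreads)]
    with ThreadPoolExecutor(nthreads) as pool:
        list(pool.map(work, chunks))
    return norms

print(sum(integrate_norms(list(range(10)), float)))  # 45.0
```

When element_norm is expensive (e.g. a series-summation analytical solution), the balanced split matters far more than for the cheap per-element work of matrix assembly.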
For C1 shell problems (KirchhoffLove) with symmetry boundary conditions, a collapsed edge due to a polar parametrization (quarter hemisphere), and inhomogeneous Dirichlet boundary conditions at the collapsed edge, the resolving of the chained MPC equations fails, such that SAM reports an inconsistency when setting up the data structures for the element assembly. This happens both for linear and nonlinear problems, although the MPC-processing is slightly different in the two cases.
The failure can be reproduced from the shell-support branches (until they are merged) for IFEM and IFEM-Elasticity, and then running LinEl -2DKLshell PolePinchedHemisphere_p2.xinp
with the attached input files.
ASMs2D::findElementContaining returns the wrong element index if the specified parameters match a knot with multiplicity 2 or higher. This implies we cannot run the Kirchhoff plate problem with a point load, where the regularity is reduced to better fit the sharpness of the solution under the point load.
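To illustrate why repeated knots trip up an element search (a sketch under assumptions; find_span is a made-up helper, not ASMs2D code): a parameter sitting exactly on a multiplicity-2 knot lies between two knot indices that span an empty interval, so a naive lookup can land in a zero-width "element".

```python
# Sketch (not IFEM code): locate the nonempty knot span containing u,
# robust to knots of multiplicity 2 or higher.
from bisect import bisect_right

def find_span(knots, u):
    # Index of the last knot strictly below the end value, i.e. the start
    # of the last nonempty span.
    last = len(knots) - 2
    while knots[last] == knots[-1]:
        last -= 1
    # bisect_right skips past all repetitions of u, so u on a double knot
    # maps to the span starting at its last occurrence (never a zero-width
    # span); the min() clamps u == end-of-domain into the last real span.
    return min(bisect_right(knots, u) - 1, last)

knots = [0, 0, 0, 0.5, 0.5, 1, 1, 1]  # degree 2 with a double knot at 0.5
print(find_span(knots, 0.5))   # 4: the span [0.5, 1), not the empty [0.5, 0.5)
print(find_span(knots, 0.25))  # 2: the span [0, 0.5)
```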
XINP-file and G2-file are provided below. This is run with mpirun -np 4 Poisson.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<simulation>
<!-- General - geometry definitions !-->
<geometry>
<patchfile>quadpiece.g2</patchfile>
<partitioning procs="4">
<part proc="0" lower="1" upper="1"/>
<part proc="1" lower="2" upper="2"/>
<part proc="2" lower="3" upper="3"/>
<part proc="3" lower="4" upper="4"/>
</partitioning>
<topology>
<connection master="1" mface="2" slave="2" sface="1"/>
<connection master="1" mface="4" slave="3" sface="3"/>
<connection master="1" mface="6" slave="4" sface="5"/>
</topology>
<topologysets>
<set name="boundary" type="face">
<item patch="4">1 2 3 4 6</item>
<item patch="3">1 2 4 5 6</item>
<item patch="2">2 3 4 5 6</item>
<item patch="1">1 2 3 5</item>
</set>
</topologysets>
</geometry>
<!-- General - boundary conditions !-->
<boundaryconditions>
<dirichlet set="boundary" comp="1"/>
</boundaryconditions>
<!-- Problem-specific block !-->
<poisson>
<source>1</source>
</poisson>
</simulation>
G2-file
700 1 0 0
3 0
2 2
0 0 1 1
2 2
0 0 1 1
2 2
0 0 1 1
-1 -1 -1
0 -1 -1
-1 0 -1
0 0 -1
-1 -1 0
0 -1 0
-1 0 0
0 0 0
700 1 0 0
3 0
2 2
0 0 1 1
2 2
0 0 1 1
2 2
0 0 1 1
0 -1 -1
1 -1 -1
0 0 -1
1 0 -1
0 -1 0
1 -1 0
0 0 0
1 0 0
700 1 0 0
3 0
2 2
0 0 1 1
2 2
0 0 1 1
2 2
0 0 1 1
-1 0 -1
0 0 -1
-1 1 -1
0 1 -1
-1 0 0
0 0 0
-1 1 0
0 1 0
700 1 0 0
3 0
2 2
0 0 1 1
2 2
0 0 1 1
2 2
0 0 1 1
-1 -1 0
0 -1 0
-1 0 0
0 0 0
-1 -1 1
0 -1 1
-1 0 1
0 0 1
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<simulation>
<!-- General - geometry definitions !-->
<geometry>
<topologysets>
<set name="Boundary" type="edge">
<item patch="1">1 2 3 4</item>
</set>
</topologysets>
</geometry>
<!-- General - boundary conditions !-->
<boundaryconditions>
<dirichlet set="Boundary" comp="1"/>
</boundaryconditions>
<!-- Problem-specific block !-->
<poisson>
<source type="expression">1</source>
</poisson>
</simulation>
Running this with the flags -2D -LR crashes IFEM. The default geometry in 2D works fine when not using LR, and it also works fine for 3D LR, but the combination 2D/LR causes the crash.
One cannot set the fill level for ILU preconditioners in PETSc.
Trying to build IFEM in the Ubuntu 16.04 app which is available on Windows 10 (as an alternative to using a virtual machine), it appears that the SuperLU package listed in README.md is wrong:
kmo@OSLN33983319A:~$ sudo apt-get install libsuperlu3-dev
[sudo] password for kmo:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package libsuperlu3-dev is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'libsuperlu3-dev' has no installation candidate
However, the package libsuperlu-dev exists (which apparently points to SuperLU 4):
kmo@OSLN33983319A:~$ sudo apt-get install libsuperlu-dev
[sudo] password for kmo:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
libsuperlu4
Suggested packages:
libsuperlu-doc
The following NEW packages will be installed:
libsuperlu-dev libsuperlu4
0 upgraded, 2 newly installed, 0 to remove and 78 not upgraded.
Need to get 332 kB of archives.
After this operation, 1,545 kB of additional disk space will be used.
Do you want to continue? [Y/n]
I assume this means the README was originally written for an older Ubuntu than 16.04 and that the SuperLU version available depends on the Ubuntu version. Therefore, we should probably refer to libsuperlu-dev instead in the installation instructions?
============================
I encountered a problem when trying to install IFEM from source (Dune version 2.4.1). I get the following error; it seems there is an issue with "superlu.hh":
...
[ 41%] Building CXX object CMakeFiles/IFEM.dir/src/LinAlg/ISTLMatrix.C.o
In file included from /usr/local/include/dune/istl/overlappingschwarz.hh:14:0,
from /home/mt/OPM/MODULES/IFEM/src/LinAlg/ISTLSupport.h:21,
from /home/mt/OPM/MODULES/IFEM/src/LinAlg/ISTLSolParams.h:19,
from /home/mt/OPM/MODULES/IFEM/src/LinAlg/ISTLMatrix.h:20,
from /home/mt/OPM/MODULES/IFEM/src/LinAlg/ISTLMatrix.C:14:
/usr/local/include/dune/istl/superlu.hh: In member function ‘void Dune::SuperLU<Dune::BCRSMatrix<Dune::FieldMatrix<K, n, p>, TA> >::decompose()’:
/usr/local/include/dune/istl/superlu.hh:524:25: error: ‘struct SuperLUStat_t’ has no member named ‘expansions’
std::cout<<stat.expansions<<std::endl;
^
In file included from /usr/local/include/dune/istl/overlappingschwarz.hh:14:0,
from /home/mt/OPM/MODULES/IFEM/src/LinAlg/ISTLSupport.h:21,
from /home/mt/OPM/MODULES/IFEM/src/LinAlg/ISTLSolParams.h:19,
from /home/mt/OPM/MODULES/IFEM/src/LinAlg/ISTLMatrix.h:20,
from /home/mt/OPM/MODULES/IFEM/src/LinAlg/ISTLMatrix.C:14:
/usr/local/include/dune/istl/superlu.hh: In member function ‘void Dune::SuperLU<Dune::BCRSMatrix<Dune::FieldMatrix<K, n, p>, TA> >::apply(Dune::SuperLU<Dune::BCRSMatrix<Dune::FieldMatrix<K, n, p>, TA> >::domain_type&, Dune::SuperLU<Dune::BCRSMatrix<Dune::FieldMatrix<K, n, p>, TA> >::range_type&, Dune::InverseOperatorResult&)’:
/usr/local/include/dune/istl/superlu.hh:602:24: error: ‘SLU_DOUBLE’ was not declared in this scope
options.IterRefine=SLU_DOUBLE;
^
/usr/local/include/dune/istl/superlu.hh: In member function ‘void Dune::SuperLU<Dune::BCRSMatrix<Dune::FieldMatrix<K, n, p>, TA> >::apply(T*, T*)’:
/usr/local/include/dune/istl/superlu.hh:685:24: error: ‘SLU_DOUBLE’ was not declared in this scope
options.IterRefine=SLU_DOUBLE;
^
make[2]: *** [CMakeFiles/IFEM.dir/src/LinAlg/ISTLMatrix.C.o] Error 1
make[1]: *** [CMakeFiles/IFEM.dir/all] Error 2
make: *** [all] Error 2
Any suggestions?