

Vector Particle-In-Cell (VPIC) Project

Welcome to the legacy version of VPIC! The new version of VPIC, based on the Kokkos performance portable framework, is available here: https://github.com/lanl/vpic-kokkos. This legacy version is no longer under active development, and new users are encouraged to use the Kokkos version.

VPIC is a general purpose particle-in-cell simulation code for modeling kinetic plasmas in one, two, or three spatial dimensions. It employs a second-order, explicit, leapfrog algorithm to update charged particle positions and velocities in order to solve the relativistic kinetic equation for each species in the plasma, along with a full Maxwell description for the electric and magnetic fields evolved via a second-order finite-difference-time-domain (FDTD) solve. The VPIC code has been optimized for modern computing architectures: it uses Message Passing Interface (MPI) calls for multi-node operation, threads for data parallelism, and a variety of short-vector, single-instruction-multiple-data (SIMD) intrinsics for high performance, and its data structures are designed to align with cache boundaries.

The current feature set for VPIC includes a flexible input deck format capable of treating a wide variety of problems. These include:

  • the ability to treat electromagnetic materials (scalar and tensor dielectric, conductivity, and diamagnetic material properties);
  • multiple emission models, including user-configurable models;
  • arbitrary, user-configurable boundary conditions for particles and fields;
  • user-definable simulation units;
  • a suite of "standard" diagnostics, as well as user-configurable diagnostics;
  • a Monte Carlo treatment of collisional processes capable of treating binary and unary collisions and secondary particle generation;
  • flexible checkpoint-restart semantics enabling VPIC checkpoint files to be read as input for subsequent simulations.

VPIC has a native I/O format that interfaces with the high-performance visualization software EnSight and ParaView. While the common use cases for VPIC employ low-order particles on rectilinear meshes, a framework exists to treat higher-order particles and curvilinear meshes, as well as more advanced field solvers.

Attribution

Researchers who use the VPIC code for scientific research are asked to cite the papers by Kevin Bowers listed below.

  1. K.J. Bowers, B.J. Albright, B. Bergen, L. Yin, K.J. Barker and D.J. Kerbyson, "0.374 Pflop/s Trillion-Particle Kinetic Modeling of Laser Plasma Interaction on Roadrunner," Proc. 2008 ACM/IEEE Conf. Supercomputing (Gordon Bell Prize Finalist Paper). http://dl.acm.org/citation.cfm?id=1413435

  2. K.J. Bowers, B.J. Albright, B. Bergen and T.J.T. Kwan, "Ultrahigh performance three-dimensional electromagnetic relativistic kinetic plasma simulation," Phys. Plasmas 15, 055703 (2008). http://dx.doi.org/10.1063/1.2840133

  3. K.J. Bowers, B.J. Albright, L. Yin, W. Daughton, V. Roytershteyn, B. Bergen and T.J.T. Kwan, "Advances in petascale kinetic simulations with VPIC and Roadrunner," J. Phys.: Conf. Series 180, 012055 (2009).

Getting the Code

To check out the VPIC source, run:

    git clone https://github.com/lanl/vpic.git

Branches

The stable release of VPIC lives on master, the default branch.

For more cutting-edge features, consider using the devel branch.

User contributions should target the devel branch.

Requirements

The primary requirement to build VPIC is a C++11 capable compiler and an up-to-date version of MPI.

Build Instructions

    cd vpic 

VPIC uses the CMake build system. To configure a build, do the following from the top-level source directory:

    mkdir build
    cd build

The ./arch directory contains various CMake scripts (with specific build options preset) that can help with building; the user is left to select which compiler to use. The scripts are largely organized into folders by compiler, with specific flags and options set to match the target compiler.

Any of the arch scripts can be invoked by file name from inside the build directory:

    ../arch/reference-Debug

After configuration, simply type:

    make

Three scripts in the ./arch directory are of particular note: lanl-ats1-hsw, lanl-ats1-knl and lanl-cts1. These scripts provide a default way to build VPIC on LANL ATS-1 clusters such as Trinity and Trinitite, and on LANL CTS-1 clusters. The LANL ATS-1 clusters are the first generation of DOE Advanced Technology Systems and consist of a partition of dual-socket Intel Haswell nodes and a partition of single-socket Intel Knights Landing nodes. The LANL CTS-1 clusters are the first generation of DOE Commodity Technology Systems and consist of dual-socket Intel Broadwell nodes running the TOSS 3.3 operating system.

The lanl-ats1-hsw, lanl-ats1-knl and lanl-cts1 scripts are heavily documented and can be configured to provide a large variety of custom builds for their respective platform types. These scripts can also serve as a good starting point for developing a build script for other platform types. Because these scripts also configure the user's build environment via module commands, they run both the cmake and make steps.

From the user created build directory, these scripts can be invoked as follows:

    ../arch/lanl-ats1-hsw

or

    ../arch/lanl-ats1-knl

or

    ../arch/lanl-cts1

Advanced users may choose to instead invoke cmake directly and hand select options. Documentation on valid ways to select these options may be found in the lanl-ats1 and lanl-cts1 build scripts mentioned above.
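For example, a direct Release-mode configure might look like this (a sketch; the option names used here are documented under Compile Time Arguments below):

    cmake -DCMAKE_BUILD_TYPE=Release -DUSE_PTHREADS=ON ..
    make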

GCC users should ensure the -fno-strict-aliasing compiler flag is set (as shown in ./arch/generic-gcc-sse).

Building an example input deck

After you have successfully built VPIC, you should have an executable in the bin directory called vpic (./bin/vpic). To build an executable from one of the sample input decks (found in ./sample), simply run:

    ./bin/vpic input_deck

where input_deck is the name of your sample deck. For example, to build the harris input deck in the sample subdirectory (assuming that your build directory is located in the top-level source directory):

    ./bin/vpic ../sample/harris
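
The build produces an executable named after the deck with a platform suffix (e.g. harris.Linux, as seen in the Command Line Arguments examples below). It can then be launched like any MPI program; for instance (a sketch):

    mpirun -n 4 ./harris.Linux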

Beginners are advised to read the harris deck thoroughly, as it provides many examples of common use cases.

Command Line Arguments

Note: Users of historic VPIC versions should note that the format of command line arguments changed in the first open source release. The equals sign is no longer accepted, and two dashes are mandatory.

In general, command line arguments take the form --command value, in which two dashes are followed by a keyword, with a space delimiting the command and the value.

The following specific syntax is available to users:

Threading

Threading (per MPI rank) can be enabled using the following syntax:

    ./binary.Linux --tpp n

where n specifies the number of threads.

Example:

    mpirun -n 2 ./binary.Linux --tpp 2

This runs VPIC with two threads per MPI rank.

Checkpoint Restart

VPIC can restart from a checkpoint dump file, using the following syntax:

    ./binary.Linux --restore <path to file>

Example:

    ./binary.Linux --restore ./restart/restart0 

This restarts VPIC from the restart file ./restart/restart0.

Compile Time Arguments

Currently, the following options are exposed at compile time for the user's consideration:

Particle Array Resizing

  • DISABLE_DYNAMIC_RESIZING (default OFF): when ON, disables dynamic resizing of particle arrays
  • SET_MIN_NUM_PARTICLES (default 128 [4 KB]): sets the minimum number of particles allowable when dynamically resizing

Threading Model

  • USE_PTHREADS (default ON): use Pthreads for the threading model
  • USE_OPENMP: use OpenMP for the threading model

Vectorization

The following CMake variables are used to control the vector implementation that VPIC uses for each SIMD width. Currently, there is support for 128-bit, 256-bit and 512-bit SIMD widths. The default is for each of these CMake variables to be OFF, which means the unvectorized reference implementations of functions will be used.

  • USE_V4_SSE: Enable 4 wide (128-bit) SSE

  • USE_V4_AVX: Enable 4 wide (128-bit) AVX

  • USE_V4_AVX2: Enable 4 wide (128-bit) AVX2

  • USE_V4_ALTIVEC: Enable 4 wide (128-bit) Altivec

  • USE_V4_PORTABLE: Enable 4 wide (128-bit) portable implementation

  • USE_V8_AVX: Enable 8 wide (256-bit) AVX

  • USE_V8_AVX2: Enable 8 wide (256-bit) AVX2

  • USE_V8_PORTABLE: Enable 8 wide (256-bit) portable implementation

  • USE_V16_AVX512: Enable 16 wide (512-bit) AVX512

  • USE_V16_PORTABLE: Enable 16 wide (512-bit) portable implementation

Several functions in VPIC have vector implementations for each of the three SIMD widths; some have fewer. An example of the latter is move_p, which has only a reference implementation and a V4 implementation.

It is possible to have a single CMake vector variable configured as ON for each of the three supported SIMD vector widths. It is recommended to always have a CMake variable configured as ON for the 128 bit SIMD vector width so that move_p will be vectorized. In addition, it is recommended to configure as ON the CMake variable that is associated with the native SIMD vector width of the processor that VPIC is targeting. If a CMake variable is configured as ON for each of the three available SIMD vector widths, then for a given function in VPIC, the implementation which supports the largest SIMD vector length will be chosen. If a V16 implementation exists, it will be chosen. If a V16 implementation does not exist but V8 and V4 implementations exist, the V8 implementation will be chosen. If V16 and V8 implementations do not exist but a V4 implementation does, it will be chosen. If no SIMD vector implementation exists, the unvectorized reference implementation will be chosen.

In summary, when using vector versions on a machine with 256-bit SIMD, the V4 and V8 implementations should be configured as ON. When using a machine with 512-bit SIMD, the V4 and V16 implementations should be configured as ON. When choosing a vector implementation for a given SIMD vector length, the implementation closest to the SIMD instruction set of the targeted processor should be chosen. The portable versions are most commonly used for debugging the implementation of new intrinsics versions. However, the portable versions are generally more performant than the unvectorized reference implementation, so one might consider using the V4_PORTABLE version on ARM processors until a V4_NEON implementation becomes available.
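Following those recommendations, configurations for a 256-bit (AVX2) machine and a 512-bit (AVX-512) machine might look like this (a sketch, using the variables listed above):

    # 256-bit SIMD (e.g. Haswell/Broadwell):
    cmake -DUSE_V4_AVX2=ON -DUSE_V8_AVX2=ON ..

    # 512-bit SIMD (e.g. Knights Landing):
    cmake -DUSE_V4_AVX2=ON -DUSE_V16_AVX512=ON ..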

Output

  • VPIC_PRINT_MORE_DIGITS: Enable more digits in timing output of status reports

Particle sorting implementation

The CMake variable below allows building VPIC to use the legacy, thread serial implementation of the particle sort algorithm.

  • USE_LEGACY_SORT: Use legacy thread serial particle sort, (default OFF)

The legacy particle sort implementation is the thread-serial sort from the legacy v407 version of VPIC. It supports both in-place and out-of-place sorting of the particles. It is very competitive with the thread-parallel sort implementation for a small number of threads per MPI rank (i.e., 4 or fewer), especially on KNL, because sorting the particles in place allows the fraction of particles stored in High Bandwidth Memory (HBM) to remain stored in HBM. Also, the memory footprint of VPIC is reduced by the size of one particle array, which can be significant for particle-dominated problems.

The default particle sort implementation is a thread parallel implementation. Currently, it can only perform out-of-place sorting of the particles. It will be more performant than the legacy implementation when using many threads per MPI rank but uses more memory because of the out-of-place sort.

Workflow

Contributors are asked to be aware of the following workflow:

  1. Pull requests are accepted into devel upon tests passing
  2. master should reflect the stable state of the code
  3. Periodic releases will be made from devel into master

Feedback

Feedback, comments, or issues can be raised through GitHub issues.

A mailing list for open collaboration can also be found here

Versioning

Version release summary:

V1.2 (October 2020)

  • Improved Neon intrinsics support
  • Added Takizuka-Abe collision operator
  • Threaded hydro_p pipelines
  • Added unit documentation

V1.1 (March 2019)

  • Added V8 and V16 functionality
  • Improved documentation and build processes
  • Significantly improved testing and correctness capabilities

V1.0

Initial release

Release

This software has been approved for open source release and has been assigned LA-CC-15-109.

Copyright

© (or copyright) 2020. Triad National Security, LLC. All rights reserved. This program was produced under U.S. Government contract 89233218CNA000001 for Los Alamos National Laboratory (LANL), which is operated by Triad National Security, LLC for the U.S. Department of Energy/National Nuclear Security Administration. All rights in the program are reserved by Triad National Security, LLC, and the U.S. Department of Energy/National Nuclear Security Administration. The Government is granted for itself and others acting on its behalf a nonexclusive, paid-up, irrevocable worldwide license in this material to reproduce, prepare derivative works, distribute copies to the public, perform publicly and display publicly, and to permit others to do so.

License

VPIC is distributed under a BSD license.


vpic's Issues

Integer overflow in dumpmacros.h

In the WRITE_HEADER_V0 macro below, the values 0xcafe and 0xdeadbeef are assigned to a short int and int respectively:

    #define WRITE_HEADER_V0(dump_type,sp_id,q_m,cstep,fileIO) do { \
    /* Binary compatibility information */ \
    WRITE( char, CHAR_BIT, fileIO ); \
    WRITE( char, sizeof(short int), fileIO ); \
    WRITE( char, sizeof(int), fileIO ); \
    WRITE( char, sizeof(float), fileIO ); \
    WRITE( char, sizeof(double), fileIO ); \
    WRITE( short int, 0xcafe, fileIO ); \
    WRITE( int, 0xdeadbeef, fileIO ); \
    WRITE( float, 1.0, fileIO ); \

But the guaranteed minimum upper limits of short int and int are 0x7fff and 0x7fffffff, so the constants are out of range. Strictly, converting an out-of-range value to a signed integer type is implementation-defined behaviour, so it could technically give different values with different compilers. Presumably all compilers handle it the same way, so it's never been a problem.

Two possible solutions that don't break people's postprocessing tools:

  • Leave it as is, because it probably doesn't matter
  • Assign the overflowed values instead (see the sketch below)
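For the second option, the same bit patterns written as in-range signed constants would be (a sketch, assuming the usual 16-bit short and 32-bit int with two's-complement representation):

    /* identical bytes on disk, no out-of-range conversion */
    WRITE( short int, -13570,     fileIO ); /* bit pattern 0xcafe     */
    WRITE( int,       -559038737, fileIO ); /* bit pattern 0xdeadbeef */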

test failures on grizzly

Hi,

I tried to run the existing ctests on grizzly and got failed tests. I ran

    cmake -DENABLE_INTEGRATED_TESTS=ON -DENABLE_UNIT_TESTS=ON ..

followed by make -j 36 (which was surprisingly slow) and then ctest.

pcomm, dump, parallel and test_collision_script failed. The failure for pcomm was:

    8: *** Advancing
    8: Recving particle with global_id 0 on rank 1
    8: Recving particle with global_id 0 on rank 3
    8: Recving particle with global_id 0 on rank 7
    8: [gr1485:16752] *** Process received signal ***
    8: [gr1485:16752] Signal: Segmentation fault (11)
    8: [gr1485:16752] Signal code: Address not mapped (1)
    8: [gr1485:16752] Failing at address: (nil)
    8: [gr1485:16754] *** Process received signal ***
    8: [gr1485:16754] Signal: Segmentation fault (11)
    8: [gr1485:16754] Signal code: Address not mapped (1)
    8: [gr1485:16754] Failing at address: (nil)
    8: [gr1485:16758] *** Process received signal ***
    8: [gr1485:16758] Signal: Segmentation fault (11)
    8: [gr1485:16758] Signal code: Address not mapped (1)
    8: [gr1485:16758] Failing at address: (nil)
    8: [gr1485:16752] [ 0] /lib64/libpthread.so.0(+0xf630)[0x2b38225a9630]
    8: [gr1485:16752] [ 1] pcomm(find_species_name+0x16)[0x413816]
    8: [gr1485:16752] [ 2] pcomm(_ZN15vpic_simulation16user_diagnosticsEv+0x1f)[0x40a15f]
    8: [gr1485:16754] [ 0] /lib64/libpthread.so.0(+0xf630)[0x2b66a4372630]
    8: [gr1485:16754] [ 1] [gr1485:16758] [ 0] /lib64/libpthread.so.0(+0xf630)[0x2b596445f630]
    8: [gr1485:16758] [ 1] pcomm(find_species_name+0x16)[0x413816]
    8: [gr1485:16758] [ 2] pcomm(_ZN15vpic_simulation16user_diagnosticsEv+0x1f)[0x40a15f]
    8: [gr1485:16758] [ 3] pcomm(_ZN15vpic_simulation7advanceEv+0xbcf)[0x41ad2f]
    8: [gr1485:16758] [ 4] pcomm(main+0x15f)[0x409c3f]
    8: [gr1485:16758] [ 5] pcomm(find_species_name+0x16)[0x413816]
    8: [gr1485:16754] [ 2] pcomm(_ZN15vpic_simulation16user_diagnosticsEv+0x1f)[0x40a15f]
    8: [gr1485:16754] [ 3] pcomm(_ZN15vpic_simulation7advanceEv+0xbcf)[0x41ad2f]
    8: [gr1485:16754] [ 4] pcomm(main+0x15f)[0x409c3f]
    8: [gr1485:16754] [ 5] /lib64/libc.so.6(__libc_start_main+0xf5)[0x2b66a542b545]
    8: [gr1485:16754] [ 6] pcomm[0x409a29]
    8: [gr1485:16754] *** End of error message ***
    8: [gr1485:16752] [ 3] pcomm(_ZN15vpic_simulation7advanceEv+0xbcf)[0x41ad2f]
    8: [gr1485:16752] [ 4] pcomm(main+0x15f)[0x409c3f]
    8: [gr1485:16752] [ 5] /lib64/libc.so.6(__libc_start_main+0xf5)[0x2b3823662545]
    8: [gr1485:16752] [ 6] pcomm[0x409a29]
    8: [gr1485:16752] *** End of error message ***
    8: /lib64/libc.so.6(__libc_start_main+0xf5)[0x2b5965518545]
    8: [gr1485:16758] [ 6] pcomm[0x409a29]
    8: [gr1485:16758] *** End of error message ***
    8: --------------------------------------------------------------------------
    8: mpiexec noticed that process rank 1 with PID 16752 on node gr1485 exited on signal 11 (Segmentation fault).

visualization of results

Could you add some readme notes on how visualization is done? My experience with other simulation tools is that a separate program is used for visualizing the simulation results.

vectorized `hydro_p` advances particles to `t+0.5dt`

hydro_p_pipeline_v4.cc, hydro_p_pipeline_v8.cc, and hydro_p_pipeline_v16.cc advance particle velocities to t+0.5dt instead of t as in hydro_p_pipeline.cc. The resulting hydro quantities are therefore not at the same time as the fields, and data analysis using both hydro and field quantities tends to have problems. The additional momentum update after the Boris rotation is the problem and should be removed:

    ux  += hax;
    uy  += hay;
    uz  += haz;

Material ids are not output correctly when using `field_dump`

Trying to output material_ids is broken

vpic/src/vpic/dump.cc, lines 643 to 644 in cd46b8a:

    const uint32_t * fref = reinterpret_cast<uint32_t *>(&f(i,j,k));
    fileIO.write(&fref[varlist[v]], 1);

and

vpic/src/vpic/dump.cc, lines 659 to 660 in cd46b8a:

    const uint32_t * fref = reinterpret_cast<uint32_t *>(&f(ioff,joff,koff));
    fileIO.write(&fref[varlist[v]], 1);

Both assume that all members of a field_t are 4 bytes wide, but material_id is only 2 bytes. The resulting output is not as expected.
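One possible direction for a fix, sketched here with hypothetical helpers (is_material_var and material_word_offset do not exist in the source), would be to special-case the 2-byte members:

    // sketch (untested): write 2 bytes for material ids, 4 bytes otherwise
    if( is_material_var( varlist[v] ) ) {  // hypothetical predicate
      const uint16_t * mref = reinterpret_cast<const uint16_t *>(&f(i,j,k));
      fileIO.write( &mref[ material_word_offset( varlist[v] ) ], 1 );  // hypothetical offset helper
    } else {
      const uint32_t * fref = reinterpret_cast<const uint32_t *>(&f(i,j,k));
      fileIO.write( &fref[ varlist[v] ], 1 );
    }

Note that this would change the on-disk record size for material fields, so postprocessing tools would need to follow suit.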

Add Native Tracer Functionality

Required Features:

  • Add a per-species unique particle ID:
    • Add a way that the overhead/functionality can be disabled at compile time
    • Add a way to only have the global IDs be enabled for certain species
  • Have an optional way to list which global_ids in a species are in fact tracers, to support dynamic tracer populations ("fast ones", "ones who hit wall X", etc.):
    • Have some nice easy user-callable method to populate tracers (every nth, x%)
    • Have some predicate based way to select particles? (Partially implemented in global_particle_id branch)
    • [OPTIONAL] Is there some smart way where we can manage an explicit tracer species, where we collected marked tracers? Maybe we don't need this..
  • Allow the user to store custom and configurable data for tracer particles:
    • Add some additional array to store data which is user selectable?
    • Let the user select the type of this array at compile time (default: float)
  • Sane and nice IO interface
    • Buffering up multiple timesteps before actually dumping
    • Try to minimize the amount of offline analysis / processing required
    • Avoid writing a million tiny files
    • HDF5 is the first output type; add support for the VPIC binary format later (similar to the existing input-deck tracer functionality)
      • Can we make the IO format line up with the proposed changes in
    • [OPTIONAL] Nice example of users can hunt down tracers which left a given rank

Code Quality:

  • Provide example deck that uses the new tracer interface
    • Must have a good, simple example of how to iterate over tracers and calculate some derived quantity touching fields (likely as a user_diagnostics)
  • Add a unit/"deck" test that uses tracer interface
    • Add a unit test to check that the ability to compile tracers in/out works
    • Test that tracers survive a checkpoint restart

VPIC crashes if the decomposition leaves the grid size as nx/ny/nz=1

This is obviously not a very likely scenario, as we run on meshes much bigger than this, but it may be a useful case for testing computationally intensive modules (and writing unit tests, etc.).

The crux of the problem seems to come down to a check that, for each dimension, does:

    const float px = (nx>1) ? g->rdx : 0;

If nx=1 this sets px to 0 (and likewise for py and pz). These values are then used in a division,

    alphadt = 0.3888889/( px*px + py*py + pz*pz );

so when all three are 0, inf/NaNs propagate throughout anything that touches the fields.
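A minimal fail-fast guard might look like this (a sketch; ERROR is VPIC's error macro, and the exact placement in the grid setup code is an assumption):

    /* sketch: refuse to run rather than divide by zero below */
    if( nx<=1 && ny<=1 && nz<=1 )
      ERROR(( "nx, ny and nz are all 1; px, py and pz would all be 0 and "
              "alphadt would divide by zero" ));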

error when starting the Harris test

Hi devs:
I am new to using VPIC for studying collisionless shocks in space physics. After the 'configure' and 'make' steps, which were both successful, an error occurred when submitting the Harris test in the sample folder. Can someone provide me some clues to solve the problem?

Thanks a lot!

#-------------------------------------------------------------------------------------#

    /usr/bin/c++ -DVPIC_USE_PTHREADS -rdynamic -I. -I/home/ckyu/vpic-master/src -std=c++11 -I/data/software/intel/oneapi/mpi/2021.5.1/include -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker /data/software/intel/oneapi/mpi/2021.5.1/lib/release -Xlinker -rpath -Xlinker /data/software/intel/oneapi/mpi/2021.5.1/lib -Xlinker --enable-new-dtags -L/data/software/intel/oneapi/mpi/2021.5.1/lib -g -DVPIC_USE_PTHREADS -DINPUT_DECK=../sample/harris /home/ckyu/vpic-master/deck/main.cc /home/ckyu/vpic-master/deck/wrapper.cc -o harris.Linux -Wl,-rpath,/home/ckyu/vpic-master/ckyu_build -L/home/ckyu/vpic-master/ckyu_build -lvpic /data/software/intel/oneapi/mpi/2021.5.1/lib/libmpicxx.so /data/software/intel/oneapi/mpi/2021.5.1/lib/release/libmpi.so /usr/lib64/librt.so /usr/lib64/libpthread.so /usr/lib64/libdl.so /data/software/intel/oneapi/mpi/2021.5.1/lib/release/libmpi.so /usr/lib64/librt.so /usr/lib64/libpthread.so /usr/lib64/libdl.so -lpthread -ldl

Error in HDF5 writing of attribute field

HDF5 attributes written to the file can be incorrect depending on which process writes last. Take for example writing the field attribute VPIC-ArrayUDF-GEO.

VPIC does (by all the processes):

    float attr_data[2][3];
    attr_data[0][0] = grid->x0;
    attr_data[0][1] = grid->y0;
    attr_data[0][2] = grid->z0;
    attr_data[1][0] = grid->dx;
    attr_data[1][1] = grid->dy;
    attr_data[1][2] = grid->dz;
    hsize_t dims[2];
    dims[0] = 2;
    dims[1] = 3;
    hid_t va_geo_dataspace_id = H5Screate_simple(2, dims, NULL);
    hid_t va_geo_attribute_id = H5Acreate2(file_id, "VPIC-ArrayUDF-GEO", H5T_IEEE_F32BE, va_geo_dataspace_id, H5P_DEFAULT, H5P_DEFAULT);
    H5Awrite(va_geo_attribute_id, H5T_NATIVE_FLOAT, attr_data);

but for the HarrisHDF5 case, grid->y0 can have different values on different processes. Hyperslab selection cannot be used with attributes, so every process writes its entire array; hence, the value that ends up in the attribute is whichever process's data was written last.
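One possible fix (a sketch, untested): reduce the per-rank origin to a single global value before the attribute write, so every process passes identical data. This assumes an MPI communicator is reachable from the dump code:

    /* sketch: make attr_data rank-independent before H5Awrite */
    float local_origin[3] = { grid->x0, grid->y0, grid->z0 };
    float global_origin[3];
    MPI_Allreduce( local_origin, global_origin, 3,
                   MPI_FLOAT, MPI_MIN, MPI_COMM_WORLD );
    attr_data[0][0] = global_origin[0];
    attr_data[0][1] = global_origin[1];
    attr_data[0][2] = global_origin[2];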

user manual

Hello,

Dear developers, I have two questions:

  1. As far as I know, the VPIC code is open source, so everybody can use it? Is that correct?
  2. Is there any manual I can read to understand how to create an input file?

thanks in advance
masoud

VPIC Versioning

As far as I'm aware, there is no versioning convention used in the VPIC source apart from distinguishing between "v4" and the master branch.

Can a VPIC versioning convention be adopted? The big differences are between v407 and master, but there may also be slight version differences between releases which may result in different output. Not having a versioning convention means an ad-hoc approach to versioning deployments, such as timestamps.

band_interleave hydro_dump

Hi,

This is a new user question: in src/vpic/dump.cc, the band_interleave branch of hydro_dump seems not to set the dim array to include the ghost layer:

    dim[0] = nxout;
    dim[1] = nyout;
    dim[2] = nzout;

while the field_dump does

    dim[0] = nxout+2;
    dim[1] = nyout+2;
    dim[2] = nzout+2;

It appears to me that the two extra cells are ghost cells. For direct dump (no stride), I believe the dim array should include them, since the memory block is dumped as a whole.
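If that reading is correct, the fix would simply mirror field_dump (a sketch, not a tested patch):

    dim[0] = nxout+2;
    dim[1] = nyout+2;
    dim[2] = nzout+2;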

Thanks,
Liang

PS:

NAN appears after about 44 steps in simulation when I use move_p under v4_acceleration.

I am trying to improve the performance of move_p. When I use move_p_v4(), which is the version of move_p() used when V4_ACCELERATION is defined, some data are calculated incorrectly, and after some steps (about 44 in my case) NaNs show up in the interpolator data; this then contaminates all the data I use and gives wrong answers in the subsequent calculations.
I tried to use gdb to trace the bug, but the cause doesn't seem to be obvious and it's hard to find out why. What can I do?

inbound license for contributions

Can you confirm that contributions to the project are licensed inbound to the project under the same license as the outbound license?

thanks

Ignored particles in pusher should have their cell-index update undone to avoid accessing out-of-bounds memory

https://github.com/lanl/vpic/blob/devel/src/species_advance/standard/pipeline/advance_p_pipeline.cc#L310

Currently the code only throws a warning about a situation where we are likely allowing it to go out of bounds in memory. We should either:

A) Have this section https://github.com/lanl/vpic/blob/devel/src/species_advance/standard/pipeline/advance_p_pipeline.cc#L230 undo the bit shift done by move_p

or B) throw an error and die

Momentum Normalization

The momentum (ux, uy, uz) needs to be normalized by the particle mass * speed of light, but this does not seem to be documented anywhere
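In other words, the stored quantities appear to be the dimensionless momenta of the usual PIC convention (worth confirming against the source before relying on it):

    u_x = p_x / (m c) = gamma * v_x / c    (and likewise for u_y, u_z)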

compiler optimization flags: CMAKE_BUILD_TYPE vs CMAKE_[lang]_FLAGS

The traditional way to set compiler optimization flags with cmake is to use CMAKE_BUILD_TYPE. The defaults with cmake are something like:

    CMAKE_C_FLAGS_DEBUG:STRING=-g
    CMAKE_C_FLAGS_MINSIZEREL:STRING=-Os -DNDEBUG
    CMAKE_C_FLAGS_RELEASE:STRING=-O3 -DNDEBUG
    CMAKE_C_FLAGS_RELWITHDEBINFO:STRING=-O2 -g -DNDEBUG

For portability, the exact flag string used by cmake depends on which compiler it detects (since different compilers may have different flag syntax). The above gcc flags come from Modules/Compiler/GNU.cmake in the cmake library directory.

CMAKE_[lang]_FLAGS_[build_type] gets merged into CMAKE_[lang]_FLAGS. Note that if CMAKE_BUILD_TYPE is not specified (i.e. empty) then nothing is merged.

If you specify both CMAKE_BUILD_TYPE and put optimization flags in CMAKE_[lang]_FLAGS at the same time, you'll get both sets of flags in your compile command line. Example:

    % cat ../CMakeLists.txt
    add_executable(testin testin.c)
    %
    % cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_FLAGS="-O0" ..
    -- The C compiler identification is GNU 5.5.0
    -- The CXX compiler identification is GNU 5.5.0
    -- Check for working C compiler: /usr/bin/cc
    -- Check for working C compiler: /usr/bin/cc -- works
    -- Detecting C compiler ABI info
    -- Detecting C compiler ABI info - done
    -- Detecting C compile features
    -- Detecting C compile features - done
    -- Check for working CXX compiler: /usr/bin/c++
    -- Check for working CXX compiler: /usr/bin/c++ -- works
    -- Detecting CXX compiler ABI info
    -- Detecting CXX compiler ABI info - done
    -- Detecting CXX compile features
    -- Detecting CXX compile features - done
    -- Configuring done
    -- Generating done
    -- Build files have been written to: /tmp/testin/b
    %
    % make VERBOSE=1 testin.o
    make -f CMakeFiles/testin.dir/build.make CMakeFiles/testin.dir/testin.c.o
    Building C object CMakeFiles/testin.dir/testin.c.o
    /usr/bin/cc   -O0 -O3 -DNDEBUG -o CMakeFiles/testin.dir/testin.c.o   -c /tmp/testin/testin.c
    %

Note that both "-O0" (from CMAKE_C_FLAGS) and "-O3 -DNDEBUG" (from CMAKE_BUILD_TYPE=Release) appear on the compile command line.

This explains why "-O3" unexpectedly appears twice in the compile command ("make VERBOSE=1") when you run lanl-cts1 on GR, for example:

    [  3%] Building C object CMakeFiles/vpic.dir/src/boundary/link.c.o
    /usr/projects/hpcsoft/toss3/grizzly/openmpi/2.1.2-intel-18.0.5/bin/mpicc -DMIN_NP=128 -DUSE_V4_AVX2 -DUSE_V8_AVX2 -DVPIC_USE_LEGACY_SORT -DVPIC_USE_PTHREADS -I/users/ccranor/src/vpic  -g -O3 -inline-forceinline -qoverride-limits -no-ansi-alias -Winline -qopt-report=5 -qopt-report-phase=all -diag-disable 10397 -Wl,--export-dynamic -O3 -DNDEBUG   -std=gnu99 -o CMakeFiles/vpic.dir/src/boundary/link.c.o   -c /users/ccranor/src/vpic/src/boundary/link.c

It would be better to let cmake control the optimization flags using CMAKE_BUILD_TYPE and not add them to CMAKE_[lang]_FLAGS. To override the cmake defaults (if needed), you can reset the build related-variables like CMAKE_CXX_FLAGS_RELEASE.
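For example (a sketch), to keep CMAKE_BUILD_TYPE in control while overriding the default release flags:

    cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS_RELEASE="-O2 -g -DNDEBUG" ..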

Issues restarting in v1.1 -- globals not loaded

When restarting in 1.1, global variables are initialised to 0. This affects decks which use the globals (e.g. sample/harris doesn't write output after restarting, sample/asymm4sp crashes due to division by zero when checking restart_interval)

I believe it's related to this change: #48.
Should decks be modified not to use globals, or is there a way to store them?

Issues with child_langmuir emitter after particle boundary interaction

The child langmuir emitter moves newly emitted and 'aged' particles by calling the particle mover as:
    pm[nm].disp##X = w*u##X*rd##X;
    pm[nm].disp##Y = w*u##Y*rd##Y;
    pm[nm].disp##Z = w*u##Z*rd##Z;
    pm[nm].i = np-1;
    nm += move_p( p, pm, a, g, qsp ); \

However, if a particle already had a boundary interaction in 'advance_p' it is at the top of the 'pm' list. The call of move_p in the emitter code then attempts to move the already out of bounds particle, instead of the newly emitted particle as intended.

This can be fixed by using a local particle mover, as is used in 'advance_p'. That is:

    DECLARE_ALIGNED_ARRAY( particle_mover_t, 16, local_pm, 1 );
    ...
    local_pm->disp##X = w*u##X*rd##X;
    local_pm->disp##Y = w*u##Y*rd##Y;
    local_pm->disp##Z = w*u##Z*rd##Z;
    local_pm->i = np-1;
    if ( move_p( p, local_pm, a, g, qsp ) ) {
      pm[nm++] = local_pm[0];
    }

vpic data analysis

Hi
Is there a description of the native VPIC format that one can use to read/visualize data from, say, a home-written Python script instead of ParaView?
Thanks in advance
Denis

Entry in CAMPA Accelerator Simulation Codes

Hi VPIC maintainers,

In CAMPA, we recently started to create an overview catalogue of Accelerator Simulation Codes. Would you be interested to update the entry for VPIC? :)

Thanks,
Axel

Builds only possible with MPI enabled

Building with -DENABLE_MPI=OFF still results in an MPI build. This is because config/project.cmake forces ENABLE_MPI to be TRUE. Fixing this still results in build errors, since MPI dependencies are included for the compile.

RelayPolicy will not build for MPWrapper

src/util/mp/MPWrapper.h is a wrapper around DMPPolicy or RelayPolicy, depending on configuration. RelayPolicy is not a valid build as it is missing its include dependencies (e.g. ConnectionManager.h, P2PConnection.h, checkpt.h).

Input deck: electric field inside set_region_field does not work?

Hi, I am just a newbie learning how to use this code, and I want to be able to introduce an electric field into my simulation. In order to do that, I included set_region_field inside begin_initialization and tried to add some magnetic and electric fields. The magnetic fields worked well, but there is no electric field (as far as I can see from the ParaView visualizations). Later I tried to understand and use field(ix, iy, iz) (the declaration is at ./src/vpic.h line 290), but I couldn't understand what these parameters meant or how to use them. Any help will be appreciated!

Correction: the declaration seems to be at ./src/field_advance/standard/remote.c line 15

Catch2 test version is out of date, blocking test execution on new non-x86 platforms

There was a "feature" in old Catch in which it emitted some raw ASM into the executable. The ASM it generated is not compatible with some modern non-x86 platforms, but this has been fixed in a newer version.

It would be neat if you could update the Catch version to something more modern :). It should be as simple as just updating the file.

the tutorial about the vpic data

Hello,

I could compile VPIC and execute the sample application based on the readme. Where could I get further information, such as how to visualize the simulation data with ParaView, and more details about the VPIC data format?

Thanks a lot for your help!

vpic 2.0 - plasma focus simulation

Dear VPIC Community,

VPIC seems to me to be a powerful and versatile plasma code, and I want to use it for my research.
Can you please answer my questions?

  1. I am new to particle-in-cell simulation packages. I do not know how to use this kind of package, and I could not find any manuals that explain how to use VPIC. Can you please direct me to some kind of manual, etc., for learning how to use VPIC for different applications?

  2. Can I use VPIC for dense plasma focus device simulation (especially the pinch phase)? I am mostly interested in pinch simulation. If it is possible, it would be great to simulate the whole plasma dynamics from the beginning until the pinch phase.

  3. I saw this paper (VPIC 2.0: Next Generation Particle-in-Cell Simulations). Can we have access to the VPIC 2.0 code?

Best regards
Thanks
Yasar Ay

About post-processing data

Hi,

I want to use VPIC for my research work. I have installed it on my local Linux machine and am running it with a sample input deck. I see that data is printing and the outputs are being saved in different folders like hydro, hhydro, field, etc. Inside each folder there are several data files, T.0, T.200, etc.

I want to know how to visualize these data and what type of data files are being generated.

please help me.

Thank you.

Regards,
Ratan

HDF5 particle dump does not write species metadata

The regular particle dump writes some meta data using WRITE_HEADER_V0. The HDF5 dump should write similar meta data (to attributes in the HDF5 file?), including species ID, species name, charge and mass per particle, what timestep the output was written at, etc.

Collisions with uneven weights "works" but gives wrong values

Right now the TA collision operator accepts a user's request to run collisions on species with different weights. This runs, but does not give a correct science answer, as unequal weights are not supported by the current formulation of the TA operator.

There is theory work by Miller and Combi that addresses this, but it needs to be implemented.

For now we can just detect the bad input and error out, as sketched below, to avoid the impression that the code is doing the right thing when it's not.
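A minimal version of that guard might look like this (a sketch; the per-species weight lookup and the use of the ERROR macro here are assumptions about the surrounding code):

    /* sketch: reject unsupported mixed-weight collisions up front */
    if( weight_of( spi ) != weight_of( spj ) ) /* hypothetical helper */
      ERROR(( "Takizuka-Abe collisions require species with equal weights" ));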

Travis CI not working

The Travis CI doesn't run on GitHub. It's probably best to migrate to GitHub Actions anyway.
