I have encountered a problem writing a large array using a single processor. I'm running a serial code that works with a large array. With smaller arrays (64x64x64, for example) the example below works fine: my *.h5 files contain 1e-6 in every position, as they should. But when I bump the size up to 1024x1024x512, each component field is 2*1024*1024*512 doubles (8 GiB), and the *.h5 output contains only 0's.
program ESIO_test
  use, intrinsic :: iso_c_binding
  use mpi
  use esio
  implicit none

  integer           :: myrank, nprocs, ierr
  real(C_DOUBLE)    :: Udata(2,1024,1024,512,3)  ! 3 vector fields, 2 components each
  type(esio_handle) :: h

  call mpi_init(ierr)
  call mpi_comm_rank(MPI_COMM_WORLD, myrank, ierr)
  call mpi_comm_size(MPI_COMM_WORLD, nprocs, ierr)

  ! ESIO needs a communicator even for a single-rank run
  call esio_handle_initialize(h, MPI_COMM_WORLD)
  call esio_file_create(h, "/work/04114/clarkp/lonestar/fields/512/PS/restart00000000.h5", .true.)

  Udata = 1.0d-6  ! fill with a recognizable nonzero value

  ! Global field extents and this rank's (here: entire) local portion
  call esio_field_establish(h, 1024, 1, 1024, 1024, 1, 1024, 512, 1, 512, ierr)

  ! Write each velocity field; the leading dimension of Udata holds
  ! the 2 interleaved components per field
  call esio_field_writev_double(h, "u", Udata(:,:,:,:,1), 2)
  call esio_field_writev_double(h, "v", Udata(:,:,:,:,2), 2)
  call esio_field_writev_double(h, "w", Udata(:,:,:,:,3), 2)

  call mpi_barrier(MPI_COMM_WORLD, ierr)
  call esio_file_close(h)
  call esio_handle_finalize(h)
  call mpi_finalize(ierr)
end program ESIO_test
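For what it's worth, I build against ESIO's Fortran module and run this on a single rank, along the lines of (the binary name is just an example):

mpiexec -n 1 ./esio_test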
I also modified this example to check the "ierr" flag after each call, but every call returned 0.
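For example, after the establish call I had roughly this (the abort-on-failure handling is my own addition, not part of ESIO):

call esio_field_establish(h, 1024, 1, 1024, 1024, 1, 1024, 512, 1, 512, ierr)
if (ierr /= 0) then
  write (*,*) "esio_field_establish failed: ierr = ", ierr
  call mpi_abort(MPI_COMM_WORLD, 1, ierr)
end if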
Yes, I know that ESIO is really meant for parallel reading and writing, and yes, I know that parallelizing the rest of my code would sidestep the problem. But up to now the serial portion of the code has worked fine, and even with large arrays it only takes a minute to run. Switching to generic HDF5 library calls and/or making the code parallel might be better in the long run, but both would cost rewrite time and/or extra code complexity. I'd prefer to stay serial if I can get away with it.
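For reference, the plain serial HDF5 fallback I'm trying to avoid would be something like the sketch below, one dataset per component field (untested at this size; the subroutine name, filename, and dataset name "u" are placeholders of mine):

subroutine write_field_serial(Udata)
  use, intrinsic :: iso_c_binding
  use hdf5
  implicit none
  real(C_DOUBLE), intent(in) :: Udata(:,:,:,:)
  integer          :: hdferr
  integer(HID_T)   :: file_id, space_id, dset_id
  integer(HSIZE_T) :: dims(4)

  dims = shape(Udata)  ! e.g. (2, 1024, 1024, 512) for one component field
  call h5open_f(hdferr)                               ! start the HDF5 Fortran interface
  call h5fcreate_f("restart_serial.h5", H5F_ACC_TRUNC_F, file_id, hdferr)
  call h5screate_simple_f(4, dims, space_id, hdferr)  ! dataspace matching the array
  call h5dcreate_f(file_id, "u", H5T_NATIVE_DOUBLE, space_id, dset_id, hdferr)
  call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, Udata, dims, hdferr)
  call h5dclose_f(dset_id, hdferr)
  call h5sclose_f(space_id, hdferr)
  call h5fclose_f(file_id, hdferr)
  call h5close_f(hdferr)
end subroutine write_field_serial

Even that, though, means hand-managing the layout metadata that ESIO already handles for me.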