
libres's Introduction

⚠️ This repository is no longer maintained ⚠️

Libres has merged with its mothership ERT 👾


libres is part of the ERT project: Ensemble based Reservoir Tool. It is now available on PyPI:

$ pip install equinor-libres

or, for the latest development version (requires GCC/clang and Python.h):

$ pip install git+https://github.com/equinor/libres.git@master

Development

libres is meant to be installed using setup.py, either directly or via pip install ./. The CMakeLists.txt exists, but is used by setup.py to generate the libres C library and by GitHub Actions to run the C tests.

libres requires a recent version of pip, so you are advised to upgrade your pip installation with

$ pip install --upgrade pip

If your pip version is too old, the installation of libres will fail and the error messages will be incomprehensible.

Building

Use the following commands to start developing from a clean virtualenv

$ pip install -r requirements.txt
$ python setup.py develop

Alternatively, pip install -e . will also set up libres for development, but it will be more difficult to recompile the C library.

scikit-build is used for compiling the C library. It creates a directory named _skbuild which is reused upon future invocations of either python setup.py develop, or python setup.py build_ext. The latter only rebuilds the C library. In some cases this directory must be removed in order for compilation to succeed.

The C library files get installed into python/res/.libs, which is where the res module will look for them.

Testing Python code

Install the required testing packages and run tests.

$ pip install -r test_requirements.txt
$ pytest

Testing C code

Install ecl using CMake as a C library. Then:

$ mkdir build
$ cd build
$ cmake .. -DBUILD_TESTS=ON
$ cmake --build .
$ ctest --output-on-failure

Configuration

The site_config file

As part of the installation process libres will install a file called site-config in share/ert/site-config; when ert starts, this file is loaded before the user's personal config file. For more extensive use of ert it might be beneficial to customize the site-config file for your site.

To customize, you need to set the environment variable ERT_SITE_CONFIG to point to an alternative file that will be used.

6.2 Forward models

libres contains basic functionality for forward models that run the reservoir simulators Eclipse/flow and the geomodelling program RMS. Exactly how these programs are invoked depends on the setup at your site, and you must make some modifications to two files installed with libres:

6.2.1. Eclipse/flow configuration

In the Python distribution installed by libres there is a file res/fm/ecl/ecl_config.yml which is used to configure which Eclipse/flow versions are available at your location. You can provide an alternative configuration file by setting the environment variable ECL_SITE_CONFIG.

6.2.2. RMS configuration

In the Python distribution installed by libres there is a file res/fm/rms/rms_config.yml which contains some site-specific RMS configuration. You should provide an alternative file, with your local path to the rms wrapper script supplied by Roxar, by setting the environment variable RMS_SITE_CONFIG to point to it.

libres's People

Contributors

agchitu, akva2, andlaus, andreabrambilla, arielalmendral, chflo, epa095, flikka, geirev, hnformentin, iloop2, jepebe, joakim-hove, jokva, jondequinor, kurtpetvipusit, lars-petter-hauge, maninfez, markusdregi, mortalisk, myrseth, oysteoh, oyvindeide, patnr, perroe, pgdr, pinkwah, rolk, sondreso, xjules


libres's Issues

Create job for filling in parameter template file

Given a template file of the following format:

FILENAME
F1 {{key1.subkey1}}
FX {{key1:subkey2}}
F1_T {{key2.subkey1}}

and a JSON object like:

{
    "key1": {
                "subkey1": 1999.22,
                "subkey2": 200
            },
    "key2": {
                "subkey1": 300
            },
    "key1:subkey1":1999.22,
    "key1:subkey2":200,
    "key2:subkey1":300
}

The new job should create a new file by replacing the keys in the template file with the corresponding values from the JSON object.

Based on #432
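The job described above could be sketched roughly as follows. This is a minimal illustration, not the actual implementation; the function names `fill_template` and `_lookup` are made up here, and it assumes that `{{a.b}}` / `{{a:b}}` placeholders resolve either against a flat key or by descending into nested dictionaries:

```python
import re

# Matches placeholders such as {{key1.subkey1}} or {{key1:subkey2}}.
PLACEHOLDER = re.compile(r"\{\{([^{}]+)\}\}")

def _lookup(parameters, key):
    """Resolve a placeholder key against the JSON object.

    Tries the flat form first (e.g. "key1:subkey2"), then splits on
    '.' or ':' and descends into nested dictionaries ("key1.subkey1").
    """
    if key in parameters:
        return parameters[key]
    for sep in (".", ":"):
        if sep in key:
            head, tail = key.split(sep, 1)
            if isinstance(parameters.get(head), dict):
                return _lookup(parameters[head], tail)
    raise KeyError(key)

def fill_template(template_text, parameters):
    """Return template_text with every {{...}} placeholder substituted."""
    return PLACEHOLDER.sub(
        lambda match: str(_lookup(parameters, match.group(1).strip())),
        template_text,
    )
```

In a real job the template would be read from a file and the parameters from a JSON file, with the result written to the output file.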

A review of the programmatic libres configuration

Context

When implemented, the structure of the dictionary configuring libres was inspired by an early version of the Everest configuration. I've later realized that this configuration was very immature. In addition, we've recently learned some lessons regarding the configuration of libres, and I want to put those to use. I hence propose a complete refactoring of the dictionary passed to ResConfig to configure libres.

General thoughts

  • I propose that we give ourselves complete freedom in this process. There is no reason to deviate from the current file based configuration just to be different, but where improvements are possible I suggest that we aim at improvement over compatibility.

  • If an invalid configuration is passed, we expect the code to say so. In some cases this is likely to happen in C, but in general we should aim for Python exceptions. It would also be nice to support some kind of validation of the configuration, but we'll probably save that for later.

  • I suggest that we pick a subset of the full space of possible libres configurations and support only those programmatically for now. Then we can expand on this down the line on an on-demand basis.

  • Disclaimer: I'm a bit torn on this one! I suggest that we continue to pass the configuration as a Python dictionary for now. I would really like the syntax config.xxx instead of config['xxx'], and I'm a big fan of the immutability of, for instance, namedtuples. However, it is slightly more cumbersome for the user to create something along the lines of a namedtuple, whether they are setting up a simple configuration of libres in a test, using it in production, or loading a configuration from yaml or similar. If any of you have thoughts on this one they are very much welcome, but until we have a better solution I propose sticking with dicts.

  • Would it be an idea to expose all the configuration keys in Python as lower-case versions of the C keywords? Are these keys used by anyone other than the configuration part of libres, like ConfigContent, ConfigParser and ResConfig (and its ~11 minions)? If not, the only change this would enforce is to lower-case all keys in ConfigKeys.py and upper-case them again when parsing the dictionary in ResConfig.

  • We should also keep in mind that this is NOT an ERT configuration, but a libres configuration. If there are settings that are only interpreted by ERT, or that should only be a concern for ERT, then I think they should not end up in the configuration below.

  • Also, if there are keys that we plan to deprecate, or that are not much used, please point that out and we can leave them out for now.
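The key-casing idea above could be sketched as a pair of recursive transforms (a sketch only; this is not the actual ConfigKeys.py or ResConfig code):

```python
def lower_keys(config):
    """Recursively lower-case dict keys for the Python-facing API."""
    if isinstance(config, dict):
        return {key.lower(): lower_keys(value) for key, value in config.items()}
    if isinstance(config, list):
        return [lower_keys(item) for item in config]
    return config

def upper_keys(config):
    """Recursively upper-case keys again before handing off to the C parser."""
    if isinstance(config, dict):
        return {key.upper(): upper_keys(value) for key, value in config.items()}
    if isinstance(config, list):
        return [upper_keys(item) for item in config]
    return config
```

The two functions are inverses for keys that round-trip through str.lower/str.upper, so the C keywords would never need to change.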

What should be configurable?
Input on this one is highly appreciated!

  • jobname
  • job_script
  • Installing jobs
  • Configuring the forward model (Should we support both simulation_job and forward_model?)
  • num_realizations
  • Random seed
  • Runpath
  • Enspath
  • Runpath list
  • Log level
  • Log file
  • queue driver
  • queue max submit
  • queue max running
  • queue min realizations
  • Config directory
  • Datafile
  • Run template
  • Eclbase
  • Gen kw
  • Gen data

Runner-ups?
Should we support these as well? (Not supporting them now is not the same as not supporting them in the near future.)

  • Refcase
  • Obs_config
  • History_source
  • Summary
  • workflows

Structure of the configuration
An overview of the configuration dict that can be given to libres. I wrote it in yaml syntax for increased readability, but when passed to ResConfig this will indeed be a Python dictionary of similar structure.

define:
  CASE_NAME: refactoring4real
  USER: ABCD
  SCRATCH: /scratch/my_super_disk

internals:
  config_directory: /<CASE_NAME>/the/path/to/the/config/file

model:
  num_realizations: 42
  data_file: /my/data_file.ECL
  grid: somegrid.EGRID
  eclbase: eclipse/ECL

environment:
  jobname: myname
  job_script: script.bs
  runpath: <SCRATCH>/%d
  runpath_list: .some_list
  enspath: simulation_results
  random_seed: 54786542654
  log_level: info
  log_file: the_log_file.loglog

install_jobs:
  -
    name: set_up_all
    path: ../../jobs/SET_UP_ALL
  - 
    name: compute_everything
    path: ../../jobs/COMPUTE_EVERYTHING
  -
    name: complete_analysis
    path: ../../jobs/COMPLETE_ANALYSIS

forward_model:
  - 
    name: set_up_all
    arglist: --include everything/really
  -
    name: compute_everything
  -
    name: complete_analysis
    arglist: --output the_result

queue:
  driver: lsf
  max_submit: 2
  min_realizations: 40
  max_running: 100

data:
  gen_kw:
    -
      name: SIGMA
      template: ../input/templates/sigma.tmpl
      out_file: coarse.sigma
      parameter_file: ../input/distributions/sigma.dist
  
  run_template:
    -
      template: ../input/templates/seed_template.txt
      export: seed.txt
  
  gen_data:
    -
      name: super_data
      result_file: super_data_%d
      report_steps: 1
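A fragment of the same structure expressed directly as the Python dictionary that would be handed to ResConfig. This is a sketch of the proposal above, not an implemented API; in particular the commented ResConfig call shows the proposed usage only:

```python
# A fragment of the proposed configuration, as a plain Python dict.
config = {
    "define": {"CASE_NAME": "refactoring4real", "USER": "ABCD"},
    "model": {"num_realizations": 42, "eclbase": "eclipse/ECL"},
    "queue": {"driver": "lsf", "max_submit": 2, "max_running": 100},
    "install_jobs": [
        {"name": "set_up_all", "path": "../../jobs/SET_UP_ALL"},
    ],
    "forward_model": [
        {"name": "set_up_all", "arglist": "--include everything/really"},
    ],
}

# res_config = ResConfig(config=config)   # proposed usage, not yet final
```

A configuration stored as yaml could be loaded into exactly this shape with a yaml parser before being passed on.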

Make CustomKW fully internalised

Definition of done:

It is possible to:

  • load data from the forward model with CustomKW,
  • let the current enkf_main object go out of scope/end the run entirely,
  • allocate a completely fresh enkf_main from the same config file as above and load the CustomKW data internalised above.

Local QUEUE; unable to stop/kill jobs

ERT seems to be unable to stop jobs submitted with 'Local QUEUE'. A user has reported this as a problem for the second time. The problem is unrelated to the job and has been confirmed with rms and with flow.

To reproduce: submit a job with the local queuing system, kill the job from GERT and observe that the process still runs on the machine.

Issues

Install pyaml
Install ecl_config.yml

libres crashes when job is killed

When submitting a batch to the batch simulator, if I kill a job in process, libres crashes with util_abort.

Abort called from: job_queue_node_free_data
(libres-master/libjob_queue/src/job_node.c:198)

Error message: job_queue_node_free_data: internal error - driver spesific job
data has not been freed - will leak.
----------------------------
 #00 util_abort__(..)        in ???
 #01 job_queue_node_free(..) in ???
 #02 job_list_reset(..)      in ???
 #03 job_list_free(..)       in ???
 #04 job_queue_free(..)      in ???
 #05 ffi_call_unix64(..)     in _ctypes/libffi/src/x86/unix64.S:79
 #06 ffi_call(..)            in _ctypes/libffi/src/x86/ffi64.c:527
 #07 _ctypes_callproc(..)    in _ctypes/callproc.c:841
 #08 ???(..)                 in _ctypes/_ctypes.c:3990
 #09 PyObject_Call(..)       in abstract.c:2548

A corollary to this - which might or might not be the same root issue - is that the test https://github.com/Statoil/libres/blob/master/python/tests/res/simulator/test_batch_sim.py#L193 fails silently - i.e. there is exit(0) and ctest is satisfied, but when run in verbose mode you see that there is some util_abort() action going on.

Possibility for user to add metadata to ERT observation file

Feature request Possibility for user to add metadata to ERT observation file

Elements of a typical observation file today:

SUMMARY_OBSERVATION THP_X-1H_2{ VALUE=407; ERROR=24; DATE=19/09/2003; KEY=WBP9:X-1H;};
GENERAL_OBSERVATION R_ABC-1AH{DATA=R_ABC-1AH_DATA; RESTART=1; OBS_FILE=../input/observations/rft/R_ABC-1AH_1986-09-10.obs;};
cat R_ABC-1AH_1986-09-10.obs

242.8  5
225.3  5
280.6  5

There is an internal workflow in Statoil which post-processes RFT data (not developed by IT SI). It is somewhat cumbersome to set up. E.g., the GENERAL_OBSERVATION R_ABC-1AH in the example above needs a corresponding file called R_ABC-1AH.txt in the same folder, with content

cat R_ABC-1AH.txt

404131.72    7213542.64    4074.90    3014.26     G22
403577.27    7214600.92    5125.10    3049.13     G13
403893.25    7214280.17    5273.70    3424.03     G22

The script assumes this file exists. The columns are x coordinate, y coordinate, measured depth (MD), vertical depth (TVD) and zone name, for the different data points. The lines in these two files should correspond line by line. In addition, the date of the data points (even though there could easily be several days between when the data points were taken) is part of the OBS_FILE filename.

The reason this solution was chosen is probably that it is not possible to add metadata (or comments to an OBS_FILE) in the ERT observation file. RFT was one example of GENERAL_OBSERVATION usage. Another common type of GENERAL_OBSERVATION is time-lapse gravity data, where metadata includes survey dates (base and repeat), and station names.

If it were possible to add metadata, another common use case would be to add comments to some of the observations. The person setting up the observation file typically acquires knowledge about the quality of the data points. It would be nice if this knowledge could be stored in the setup as metadata (it could then also be visualized in other postprocessing tools like webviz). Today this knowledge is often lost (or hidden in some modelling pdf report) if the person setting up the observation file leaves the modelling project.

A workaround for the end user could of course be to create another observation file (which supports metadata), and then have a script taking this as input and creating the ERT obs file as output. However, that feels like a hack (and you will need to run it before ERT every time the observation file is changed, in order to stay in sync). I.e. the best solution is probably to have the opportunity to add generic metadata to the observation file format in ERT.
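The workaround described above could look roughly like this: a small preprocessing step (entirely hypothetical; the annotated format with '#' comments and extra trailing columns is an assumption, not an existing convention) that strips metadata before ERT sees the OBS_FILE:

```python
def strip_metadata(annotated_lines):
    """Drop '#' comments and trailing metadata columns, keeping value/error pairs.

    Hypothetical input format: "value error [extra columns] # free-form comment".
    """
    clean = []
    for line in annotated_lines:
        line = line.split("#", 1)[0].strip()   # remove inline comments
        if not line:
            continue
        value, error = line.split()[:2]        # keep only the first two columns
        clean.append(f"{value}  {error}")
    return clean
```

Running this before every ERT invocation is exactly the synchronization hassle the issue argues against, which is why native metadata support would be preferable.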

Difference between <GEO_ID>, %d and <IENS>

I am creating an ert config file for carrying out an ensemble experiment and am a bit confused by the terms <GEO_ID>, %d and <IENS>. I would appreciate it if you could help me understand the difference between these terms and when to use which:

  1. Using the DEFINE Keyword
  2. With ECLBASE
  3. Within JOBS, e.g. COPY_FILE
  4. Inside the Eclipse DATA file (e.g. INCLUDE ../real-<GEO_ID>/include)

Gracefully fail instead of `Killed`

Certain errors, especially in forward models (Python scripts called from C), cause res to be Killed.

Try to die in a nicer fashion with a message telling which job is responsible for the imminent death.

Update init_update function signature

The function signature used in analysis modules: https://github.com/Statoil/libres/blob/master/lib/include/ert/analysis/analysis_table.hpp#L46

Should get an extra argument after ens_mask so that the new signature becomes:

typedef void (analysis_init_update_ftype) (void * module_data,
                                           const bool_vector_type * ens_mask,
                                           const bool_vector_type * obs_mask,   /* New argument */
                                           const matrix_type * S,
                                           ...

The obs_mask argument should be created with obs_data_alloc_mask().

Improvements to Localization

We discussed today how to improve localization. In general, we observed that most scripts contained

class Localize(ErtScript):
    def run(self):
        ert = self.ert
        # ... some boilerplate setup
        get_region()
        get_field()
        do_stuff()
        addMinistep()
        # some boilerplate tear down

One immediate improvement could be to provide a context manager, so that the user's code becomes like this:

class Localize(ErtScript):
    def run(self):
        with localize(regions=3, surfaces=0, genkw=2) as params:
            do_stuff(params.regions[0])
            do_stuff(params.regions[1])
            do_stuff(params.regions[2])
            do_stuff(params.genkw[0])
            do_stuff(params.genkw[1])

This could perhaps be made into a decorator; I'm not sure exactly how that would look, though:

class Localize(ErtScript):
    @localize(regions=3, surfaces=0, genkw=2)
    def run(self):
        do_stuff(params.regions[0])
        do_stuff(params.regions[1])
        do_stuff(params.regions[2])
        do_stuff(params.genkw[0])
        do_stuff(params.genkw[1])

Or, we could think more outside the box and just mimic unittest. In this case we'll use some reflection: we will run every method prefixed with localize_, and do the setUp and tearDown automatically. We could also provide functionality for users to implement these, like in unittest, also adding a setUpClass in case several functions share functionality.

class Localize(ErtScript):
    def localize_reg0(self):
        do_stuff()

    def localize_reg1(self):
        do_stuff()

    def localize_my_own_kw(self):
        do_stuff()
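A sketch of what the reflection driver behind that style could look like. The base class and the run_localizations name are made up here; the real hooks would presumably live in ErtScript:

```python
class LocalizeBase:
    """Hypothetical base class: discovers and runs all localize_* methods."""

    def set_up(self):
        """Boilerplate setup; users may override, like unittest.setUp."""

    def tear_down(self):
        """Boilerplate teardown; users may override, like unittest.tearDown."""

    def run_localizations(self):
        """Run every localize_* method, wrapped in set_up/tear_down."""
        ran = []
        for name in sorted(dir(self)):
            if name.startswith("localize_"):
                self.set_up()
                getattr(self, name)()
                self.tear_down()
                ran.append(name)
        return ran
```

A user script would then only define localize_reg0, localize_reg1, etc., and never touch the boilerplate.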

These are just some of the directions we could consider; I'm sure there are other and better alternatives.

try open status file

This happened when running Everest:

seba: Simulation "Batch1_Grad1_Func1" started.
Traceback (most recent call last):
  File "/project/res/komodo/unstable/root/bin/job_dispatch.py", line 409, in <module>
    main( sys.argv )
  File "/project/res/komodo/unstable/root/bin/job_dispatch.py", line 365, in main
    job_manager.completeStatus(exit_status, error_msg, job=job)
  File "/project/res/komodo/unstable/root/lib/python2.7/site-packages/res/job_queue/job_manager.py", line 285, in completeStatus
    with open(self.STATUS_file, "a") as f:
IOError: [Errno 2] No such file or directory: 'STATUS'

Should we check whether the file exists before opening it, and if it does not, wait a while or return an error message?
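The check-and-wait suggestion could be sketched like this. The function name, timeout and poll interval are assumptions for illustration, not the job_manager API:

```python
import os
import time

def append_with_retry(path, text, timeout=10.0, poll=0.5):
    """Wait for `path` to appear, then append `text`; raise after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while not os.path.exists(path):
        if time.monotonic() > deadline:
            raise IOError(f"gave up waiting for {path}")
        time.sleep(poll)
    with open(path, "a") as f:
        f.write(text)
```

Whether waiting is correct depends on why STATUS is missing; if the runpath itself was removed, failing fast with a clear error is probably better than polling.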

Forward model jobs: EXEC_ENV should special case '<...>' and 'none'

The forward model job description files have a keyword EXEC_ENV which can be used to populate the environment which is eventually passed to the os.execve( ) function to invoke the forward model.

The issue here is to add some special case code to this setting:

  1. If the value is of the form <...> the setting is ignored. The purpose of this is to facilitate the situation where a <VALUE> replacement key is used and the user has not passed a value for it.

  2. Special treatment of the string null - that should be used to insert NULL in the dictionary, which will eventually be interpreted as an unsetenv.
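The two special cases could be implemented roughly as follows. This is a sketch only (the real handling would live in the job parser), and it represents the null case as Python None:

```python
import os

def build_exec_env(raw_env):
    """Apply the EXEC_ENV special cases before the final os.execve() call.

    - values still of the form '<...>' (an unsubstituted replacement key) are ignored;
    - a None value (the "null" case) means the variable should be unset.
    """
    env = dict(os.environ)
    for key, value in raw_env.items():
        if isinstance(value, str) and value.startswith("<") and value.endswith(">"):
            continue                    # replacement key was never filled in: ignore
        if value is None:
            env.pop(key, None)          # eventually interpreted as an unsetenv
        else:
            env[key] = str(value)
    return env
```

The resulting dictionary would then be passed as the env argument of os.execve().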

Backport to 2.3

BatchSimulator is not yet compatible with everest

Everest will generate a res_config which includes gen_kw for controls and gen_data for functions.
BatchSimulator.start() will fail when trying to add control_names and/or results with the same name in the same res_config, and it will raise an exception.

tests fail if file exists

Some libres tests fail if the user does not have write access to /tmp/torqueue_debug.txt or /tmp/log/log.txt.

Fix warnings

Please, everybody, take note when new warnings are introduced!

Fix warnings

res/libenkf/src/model_config.c: In function ‘model_config_init’:
res/libenkf/src/model_config.c:454:40: warning: initialization discards ‘const’
qualifier from pointer target type [-Wdiscarded-qualifiers]
      config_content_node_type * node = config_content_iget_node( config , i);
                                        ^~~~~~~~~~~~~~~~~~~~~~~~
res/libenkf/src/site_config.c: In function ‘site_config_init’:
res/libenkf/src/site_config.c:581:44: warning: pointer targets in passing
argument 2 of ‘util_sscanf_octal_int’ differ in signedness [-Wpointer-sign]
     if (util_sscanf_octal_int(string_mask, &umask_value))
                                            ^
In file included from res/libenkf/src/site_config.c:25:0:
ecl/lib/include/ert/util/util.h:236:16: note: expected ‘int *’ but argument is
of type ‘mode_t * {aka unsigned int *}’
   bool         util_sscanf_octal_int(const char * buffer , int * value);
                ^~~~~~~~~~~~~~~~~~~~~

BUG in std_scale_correlated_obs

Issue reported by @ttekttek - probably due to a STD_SCALE_CORRELATED_OBS job with only one observation in it, which has (probably) given some division-by-zero problem related to 1/(n - 1). It does not make sense to scale a single observation - we should just return with a message in that case.

project/snorre/reservoirmodels/ff/2018a/users/reseng_310118/ert/model> ert --text MLY_2018a_15_stability_3.ert ALL_OBSERVATIONS_20180323_SVD_WF > all_obs.txt
Failed to load symbol:enkf_tui_plot_all_summary_JOB Error:/project/res/komodo/2018.03.02/root/bin/ert_tui: undefined symbol: enkf_tui_plot_all_summary_JOB
** Warning: failed to add workflow job:PLOT_ALL_SUMMARY from:/project/res/komodo/2018.03.02/root/share/ert/workflows/jobs/internal-tui/config/PLOT_ALL_SUMMARY

Error message: Program received signal:8

See file: /tmp/ert_abort_dump.ttek.20181119-122322.log for more details of the crash.
Setting the environment variable "ERT_SHOW_BACKTRACE" will show the backtrace on stderr.
cat /tmp/ert_abort_dump.ttek.20181119-122322.log^CAbort (core dumped)
[1]  + Done                          emacs SOME_OBSERVATIONS_20180323_SVD_WF
/project/snorre/reservoirmodels/ff/2018a/users/reseng_310118/ert/model>
/project/snorre/reservoirmodels/ff/2018a/users/reseng_310118/ert/model> cat /tmp/ert_abort_dump.ttek.20181119-122322.log


Abort called from: util_abort_signal (/var/jenkins/jenkins-be/workspace/komodo/cache/libecl-2.3.1/lib/util/util.c:4408)

Error message: Program received signal:8



--------------------------------------------------------------------------------
#00 util_abort__(..)                                                              in ??:0
#01 ???(..)                                                                       in sigaction.c:0
#02 enkf_obs_scale_correlated_std(..)                                             in ??:0
#03 enkf_main_std_scale_correlated_obs_JOB(..)                                    in ??:0
#04 workflow_run(..)                                                              in ??:0
#05 ert_workflow_list_run_workflow__(..)                                          in ??:0
#06 enkf_main_run_workflows(..)                                                   in ??:0
#07 main(..)                                                                      in ??:0
#08 __libc_start_main(..)                                                         in ??:0
#09 ???(..)                                                                       in ??:0
--------------------------------------------------------------------------------

Fix bug in workflow job enkf_job_scale_std_correlated:

This trace-back/bug comes from a situation where some realisations have failed and there is missing data. We must be more robust against that in the function enkf_main_std_scale_correlated_obs_JOB().

Abort called from: __hash_get_node (/var/jenkins/jenkins-be/workspace/komodo/cache/libecl-master/lib/util/hash.cpp:90)

Error message: __hash_get_node: tried to get from key:WOPT:A-16HP.115 which does not exist - aborting



--------------------------------------------------------------------------------
 #00 util_abort__(..)                                                              in ??:0
 #01 ???(..)                                                                       in hash.cpp:0
 #02 hash_get(..)                                                                  in ??:0
 #03 block_fs_fread_realloc_buffer(..)                                             in ??:0
 #04 ???(..)                                                                       in block_fs_driver.cpp:0
 #05 ???(..)                                                                       in enkf_node.cpp:0
 #06 enkf_obs_get_obs_and_measure_node(..)                                         in ??:0
 #07 enkf_obs_get_obs_and_measure_data(..)                                         in ??:0
 #08 enkf_obs_scale_correlated_std(..)                                             in ??:0
 #09 enkf_main_std_scale_correlated_obs_JOB(..)                                    in ??:0
 #10 ffi_call_unix64(..)                                                           in /var/jenkins/jenkins-be/workspace/komodo/cache/Python-2.7.14/Modules/_ctypes/libffi/src/x86/unix64.S:79
 #11 ffi_call(..)                                                                  in /var/jenkins/jenkins-be/workspace/komodo/cache/Python-2.7.14/Modules/_ctypes/libffi/src/x86/ffi64.c:527

snake_oil fails to run

test-data/local/snake_oil/snake_oil.ert

Replacing JOBNAME with ECLBASE is a workaround

Crash on log level CRITICAL

Activity will be logged to ..............: (null)

Error message: log_add_message: logh->stream == NULL - must call
log_reset_filename() first

See file: /tmp/ert_abort_dump.pgdr.20180118-160917.log for more details of the crash.
Setting the environment variable "ERT_SHOW_BACKTRACE" will show the backtrace on stderr.
Aborted (core dumped)
[pgdr@be-lx885510 everest/model]$ cat /tmp/ert_abort_dump.pgdr.20180118-160917.log

Abort called from: log_add_message
(...cache/libres-master/libres_util/src/log.c:160)

Error message: log_add_message: logh->stream == NULL - must call
log_reset_filename() first


----------------------------------------
 #00 util_abort__(..)                   in ???
 #01 log_add_message(..)                in ???
 #02 res_log_add_fmt_message(..)        in ???
 #03 rng_manager_log_state(..)          in ???
 #04 rng_config_alloc_rng_manager(..)   in ???
 #05 enkf_main_rng_init(..)             in ???
 #06 enkf_main_alloc(..)                in ???
 #07 ffi_call_unix64(..)                in /var...
 #08 ffi_call(..)                       in /var...
 #09 _ctypes_callproc(..)               in /var...
 #10 ???(..)                            in /var...
 #11 PyObject_Call(..)                  in /var...
 #12 PyEval_EvalFrameEx(..)             in /var...

Regression in logging?

I'm afraid #280 introduced a regression in libres due to initializing logging before changing directory into the wanted working dir.

$ cat /tmp/ert_abort_dump.jenkins.20180507-091319.log

Abort called from: util_fopen
(/var/jenkins/jenkins-be/workspace/komodo/cache/libecl-master/lib/util/util.c:4308)

Error message:
util_fopen: failed to open:log.txt with mode:'a+' - error:Read-only file system(30)


----------------------------------------
 #00 util_abort__(..)                   in ??:0
 #01 util_fopen(..)                     in ??:0
 #02 util_mkdir_fopen(..)               in ??:0
 #03 log_open(..)                       in ??:0
 #04 res_log_init_log(..)               in ??:0
 #05 res_log_add_message(..)            in ??:0
 #06 ???(..)                            in res_log.c:0
 #07 res_log_fwarning(..)               in ??:0
 #08 config_parser_add_key_values(..)   in ??:0
 #09 ???(..)                            in config_parser.c:0
 #10 config_parse(..)                   in ??:0

remove build GUI stuff

Fix the cause of this

ERROR: test_import_gui (tests.global.test_import_gui.ImportGUI)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/private/pgdr/statoil/libres/build/lib/python2.7/site-packages/tests/global/test_import_gui.py", line 26, in test_import_gui
    self.assertTrue( self.import_package( "ert_gui" ))
  File "/private/pgdr/statoil/libecl/build/install/lib/python2.7/site-packages/ecl/test/import_test_case.py", line 32, in import_package
    module = self.import_module( package )
  File "/private/pgdr/statoil/libecl/build/install/lib/python2.7/site-packages/ecl/test/import_test_case.py", line 28, in import_module
    mod = importlib.import_module( module )
  File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
ImportError: No module named ert_gui

----------------------------------------------------------------------

Debug text from: enkf_obs_scale_correlated_std

The workflow function: enkf_main_std_scale_correlated_obs_JOB can be used to estimate correlations among observations and then subsequently inflate the observation std to take account of the correlations.

The key function here is: enkf_obs_scale_correlated_std()

The task is to create some basic text output from this function which can be used to check the scaling factor calculated. This is interesting output, and in the future a "good" output module should probably be devised, for now we can go with printf. This should be included in 2.3 - and maybe even backported to stable.

queue not open and not ready for use

Abort called from: job_queue_check_open ($libres/libjob_queue/src/job_queue.c:820)

Error message: job_queue_check_open: queue not open and not ready for use; method job_queue_reset must be called before using the queue - aborting

 #00 ???(..)                       in ~libecl/lib/util/util_abort_gnu.c:170
 #01 util_abort__(..)              in ~libecl/lib/util/util_abort_gnu.c:303
 #02 job_queue_check_open(..)      in ~libres/libjob_queue/src/job_queue.c:821
 #03 job_queue_run_jobs(..)        in ~libres/libjob_queue/src/job_queue.c:864
 #04 job_queue_run_jobs__(..)      in ~libres/libjob_queue/src/job_queue.c:1040
 #05 ????
 #06 clone(..)                     in ???

Moving Matrix to libanalysis

We could consider moving the different matrix types from libres_util to libanalysis, that is, moving the files
libres_util/include/ert/res_util/matrix_blas.h
libres_util/include/ert/res_util/matrix.h
libres_util/include/ert/res_util/matrix_lapack.h
libres_util/include/ert/res_util/matrix_stat.h
libres_util/include/ert/res_util/regression.h
libres_util/src/matrix_blas.c
libres_util/src/matrix.c
libres_util/src/matrix_lapack.c
libres_util/src/matrix_stat.c
libres_util/src/regression.c
libres_util/tests/ert_util_matrix.c
libres_util/tests/ert_util_matrix_lapack.c
libres_util/tests/ert_util_matrix_stat.c

to libanalysis.

That way, we are better able to isolate the libanalysis library as a meaningful library which is simultaneously simpler to interact with.

Ability to pass environment variables to forward model jobs.

This problem has come about due to RMS, which has an embedded Python3 interpreter that can be used to write Python plugins, and its interaction with a Python2 environment. The situation is:

  1. The user has created a package with Python3 modules, located at $USER_PYTHON3_PATH
  2. The script used by ERT to start up rms needs access to the normal ert Python2 environment during runtime.

The final step when the ert rms script starts the real rms is:

os.execv( path_to_rms )

My suggestion is to allow a new keyword EXEC_ENV to be added to the job description files, like:

EXEC_ENV  PYTHONPATH /some/python/path

Then this environment variable can be passed in the final exec call.
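Assuming that keyword, the final exec could become something like the sketch below. The helper names are made up; only os.environ and os.execve are real APIs:

```python
import os

def merged_env(exec_env):
    """Layer EXEC_ENV entries, e.g. {"PYTHONPATH": "/some/python/path"}, on os.environ."""
    env = dict(os.environ)
    env.update(exec_env)
    return env

def exec_with_env(executable, args, exec_env):
    """Replace the current process with the forward model, passing the merged env."""
    os.execve(executable, [executable] + list(args), merged_env(exec_env))
```

Since os.execve replaces the process image, anything after the call never runs; the merged environment is what the forward model sees at startup.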

Stuff from ert-statoil

Forward models:

  1. Should use the rms functionality in libres and remove it from ert-statoil.
  2. Should use the shell utility stuff in libres and remove it from ert-statoil.

site-config:
Currently the site-config file is taken verbatim from ert-statoil and the file found in libres is completely ignored. We should start with the one in libres and then only append site-specific content from ert-statoil.

lsf info file

Create a small file - e.g. lsf_info.json in the runpath folder which an external program can use to infer information about the (lsf) process.

RMS f..up

This problem/bug/... is really in ert-statoil.

16:20:02 RMS_BATCH Could not find target_file: rms_run.haveRMSPid:proc.exe() failed with error "psutil.AccessDenied (pid=21247, name='rms')".
[the same AccessDenied message repeated dozens of times]
/scratch/fmu/bisi/3_r002_reek/realization-0/iter-0/rms.stderr.33

Consider adding a "stdout log level"

The current log level setting writes every message at that level or higher to the log file.

Should we have a separate "stdout level" that also writes every message at or above a given threshold to stdout?

e.g.

LOG_LEVEL = INFO
LOG_STDOUT = ERROR

could write every message with ERROR or higher to the terminal, whereas every message with INFO or higher is written to the log file.

Remove fs info from job_dispatch

The job_dispatch script logs quite a lot of file-system info and sends it to Kibana. That logging was added at a time of frequent infrastructure problems; it is no longer required and should be removed.

Removing support for column major order

In the Matrix library, we support row and column major order.

If we don't use this feature, we should consider removing it and sticking to one specific order (the default).

If, however, we do support and use this functionality, I think this is a bug that should be fixed, or at the very least a function that could be improved.

void matrix_fscanf_data( matrix_type * matrix , bool row_major_order , FILE * stream ) {
  int row,col;
  if (row_major_order) {
    for (row = 0; row < matrix->columns; row++) {   /* Bug: should iterate matrix->rows */
      for (col = 0; col < matrix->columns; col++) {
        __fscanf_and_set( matrix , row , col , stream);
      }
    }
  } else {
    /* Bug: this branch is identical to the row-major one; for column
       major order the outer loop should run over columns and the
       inner loop over rows. */
    for (row = 0; row < matrix->columns; row++) {   /* Bug: should iterate matrix->rows */
      for (col = 0; col < matrix->columns; col++) {
        __fscanf_and_set( matrix , row , col , stream);
      }
    }
  }
}

BLOCK_OBSERVATION with source=summary causes ERT to segfault without a grid

I have managed to get a segfault by configuring ERT without a grid and with a BLOCK_OBSERVATION with source=summary. The traceback indicates that ERT is trying to read the grid even though this is not necessary.

Abort called from: util_abort_signal (/var/jenkins/jenkinsbe/workspace/komodo/cache/libecl-2.2.8/lib/util/util.c:4959)

Error message: Program received signal:11


#00 util_abort__(..) in ??:0
#1 ???(..) in sigaction.c:0
#2 ecl_grid_ijk_valid(..) in ??:0
#3 block_obs_alloc_complete(..) in ??:0
#4 obs_vector_alloc_from_BLOCK_OBSERVATION(..) in ??:0
#5 enkf_obs_load(..) in ??:0

Bug running Ensemble smoother

  1. Start a fresh case.
  2. Select Ensemble Smoother run.
  3. Stop - or - complete the current run.
  4. Restart ert - now current case should be: "smoother_run".
  5. Select "Ensemble smoother" run:
Abort: run-arg.cpp:77
Error message: run_arg_alloc: internal error - can not have sim_fs == update_target_fs

Jinja2 templating in libres

Today there is a job in Everest that makes Jinja2 templating easily available to its users. Should we move this job to libres to also let ERT users utilise it?

Templating functionality of this kind was requested on Yammer.
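For reference, the core of such a job would be a thin wrapper around Jinja2's rendering API. A minimal sketch (the function name is hypothetical, and the third-party Jinja2 package is required):

```python
from jinja2 import Template  # third-party: pip install Jinja2

def render_template(template_text, parameters):
    # Hypothetical forward-model step: render a template string with
    # per-realization parameters supplied as a dict.
    return Template(template_text).render(**parameters)
```

A real forward-model job would read the template and parameters from files in the runpath and write the rendered result next to them, but the rendering itself is this one call.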
