
spine-engine's People

Contributors

erkkar, jkiviluo, manuelma, pekkasavolainen, piispah, richiebrady, sjvrijn, soininen, suvayu

spine-engine's Issues

Where should 'execution code' live?

In GitLab by @manuelma on Nov 27, 2019, 13:41

The current version of Spine Engine does not hold any execution code. SpineEngine.run just calls the execute method of the project items passed as arguments to the constructor.

The question arises whether we want to move that code to Spine Engine somehow, you know, for better isolation. Is that a goal for us? It may be a bit inconvenient for Project item plugin authors to have to write code in two parts: GUI management code in Spine Toolbox, and execution code in Spine Engine.

Another possibility is to just move tool instance execution code (ToolInstance, ExecutionManager, maybe ToolSpecification) to Spine Engine, and leave ProjectItem.execute where it is now.

Thoughts?

Engine hangs if persistent process dies

If the persistent process dies in the middle of a command or tool execution, we never recover. This happens e.g. when the Linux kernel kills a Julia process that is using too much memory.

ProjectItemResource enhancements

The ProjectItemResource class has been growing organically and I feel some cleanup might be in order.

  • The only use for the provider field seems to be to query the provider's name. We even exchange the real provider with a private _ResourceProvider class behind the scenes. We could just change provider to provider_name, which would store the name only.
  • The "label" metadata field has become the way to identify a resource. "label" could be lifted from the metadata dictionary to an actual field in ProjectItemResource.
  • We could use label instead of the "pattern" metadata for resources that should live under a common name. The file pattern concept needs to be generalized to 'multiple files under a common label' for the sake of the new Exporter anyhow.
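A minimal sketch of what the cleaned-up class could look like, with provider_name as a plain string and label promoted to a real field. Field names and types here are assumptions for illustration, not the actual spine-engine class:

```python
from dataclasses import dataclass, field

# Illustrative sketch only; field names and defaults are assumptions.
@dataclass
class ProjectItemResource:
    provider_name: str          # replaces the provider object; name only
    type_: str
    label: str                  # promoted out of the metadata dict
    url: str = ""
    metadata: dict = field(default_factory=dict)

resource = ProjectItemResource("Data Store 1", "database", "db_url@Data Store 1")
print(resource.provider_name)
```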

Allow do_work() to return item specific data to execute()

Most executable items' business logic is contained in a do_work() function which runs in a separate process to facilitate parallelism. Currently, the process returns a single status flag. Some project items (GdxExporter, the future general-purpose exporter) might benefit if additional (picklable) data could be returned from do_work().

The plan here is to make do_work() return a tuple of at least one element, the first of which must be the status flag. The rest of the elements can then be whatever item-specific data do_work() might return.
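The proposed convention could look like the following sketch, where the function body and the extra payload are hypothetical:

```python
# Hypothetical do_work() following the proposed convention: a tuple whose
# first element is the status flag; the rest is item-specific picklable data.
def do_work():
    status = True
    exported_files = ["db1.gdx", "db2.gdx"]   # made-up extra payload
    return (status, exported_files)

# The caller (e.g. execute()) unpacks the flag and passes the rest on:
status, *extra_data = do_work()
print(status, extra_data)
```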

Experimental Engine fails with a simple batch script

In GitLab by @ererkka on Nov 12, 2020, 09:45

Steps to reproduce

  1. Enable Experimental Engine
  2. Add a Tool item running a batch script containing just echo Hello
    (Could be a Bash script as well, could not test.)
  3. Execute

Expected

See ‘Hello’ printed to Process Log

Result

Event Log:

-------------------------------------------------
Executing Selected Directed Acyclic Graphs
-------------------------------------------------
Starting DAG 1/1
Order: Tool 1
DAG 1/1 failed

Event Log -- Tool 1:

[12-11-2020 09:43:41] Executing Tool Tool 1
[12-11-2020 09:43:41] ***
[12-11-2020 09:43:41] *** Executing Tool specification hello in source directory ***
[12-11-2020 09:43:41] *** Starting instance of Tool specification hello ***

No traceback, nothing is printed in the Process Log.

Filtered resource expansion doesn't work when executing an item that outputs multiple resources in parallel

Execution is broken when an item runs in parallel and produces more than one output resource. There are two problems:

  • SpineEngine._execute_item() scrambles the output resources upon returning. I believe we need to remove the zip logic and return the output resource lists as is.
  • SpineEngine._filtered_resources_iterator() breaks the resource lists provided by the previous item in such a way that applying product() yields incorrect combinations of filtered_forward_resources. We need to separate such lists from possibly expandable resources and handle them specially in product() to get the correct combinations.
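To illustrate the product() problem with a toy example (the resource and filter names are made up): a list of output resources that belong together must enter product() as a single unit, otherwise it gets expanded as if it were another filter dimension:

```python
from itertools import product

# Two filter values and one item whose two output resources belong together.
filters = ["scenarioA", "scenarioB"]
outputs = ["out_a", "out_b"]

# Treating the output list as just another dimension expands it,
# giving four combinations instead of two:
wrong = list(product(filters, outputs))

# Keeping the inseparable list as a single unit yields the intended pairs:
right = list(product(filters, [outputs]))
print(right)
```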

Parallelize Engine

In GitLab by @soininen on Oct 9, 2020, 15:28

The Engine is supposed to be able to run multiple DAGs as well as branches in a single DAG in parallel.

The first task would probably be to find out how this could be achieved in dagster.

Experimental Engine has problems with non-UTF-8 characters in cmd.exe output

In GitLab by @PekkaSavolainen on Nov 11, 2020, 10:57

Steps to reproduce:

  1. Enable experimental engine
  2. Create a Gimlet that runs 'dir' in cmd.exe and execute

Expected result:

Process log should list the contents of the current directory

Instead:

  1. Process log lists the command output until a non-UTF-8 character is encountered
  2. This traceback appears:
Exception in thread Thread-56:
Traceback (most recent call last):
  File "C:\Python36\lib\threading.py", line 916, in _bootstrap_inner
    self.run()
  File "C:\Python36\lib\threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Python36\lib\site-packages\spine_engine\execution_managers.py", line 90, in _log_stdout
    self._logger.msg_proc.emit(line.decode("UTF8").strip())
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 28: invalid start byte
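One possible mitigation, sketched below, is to decode console output with the system's preferred encoding and replace undecodable bytes instead of raising. Note that locale.getpreferredencoding() gives the ANSI code page on Windows while cmd.exe may use the OEM code page, so this is a heuristic, not a guaranteed match:

```python
import locale

def decode_console_line(line: bytes) -> str:
    # Heuristic: use the system's preferred encoding rather than assuming
    # UTF-8; errors="replace" guarantees decoding never raises.
    encoding = locale.getpreferredencoding(False)
    return line.decode(encoding, errors="replace").strip()

# A byte sequence like the one in the traceback no longer kills the thread:
print(decode_console_line(b"Directory of C:\\Users \xff"))
```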

Issue 8 remove engine qt signals - [merged]

In GitLab by @RichieBrady on Sep 17, 2020, 14:01

Merges issue_8_remove_engine_qt_signals -> master

The new branch addresses issue 8 by replacing Qt signals with an event-based publisher class.

Also addressed is issue 9, allowing the use of dagster >= 0.9.0.

Engine version has been bumped from 0.5.0 to 0.6.0.

The merge will break communication between toolbox and engine until the toolbox has been updated with the subscriber classes required by the engine event publisher.
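A minimal sketch of an event-based publisher of the kind described; the class and method names here are illustrative, not the actual spine-engine API:

```python
from collections import defaultdict

# Illustrative publisher/subscriber mechanism replacing Qt signals.
class EventPublisher:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, callback):
        self._subscribers[event_type].append(callback)

    def dispatch(self, event_type, *args):
        # Copy the list so callbacks may subscribe/unsubscribe safely.
        for callback in list(self._subscribers[event_type]):
            callback(*args)

publisher = EventPublisher()
publisher.subscribe("node_execution_started", lambda name: print("started:", name))
publisher.dispatch("node_execution_started", "Tool 1")
```

Toolbox would then register plain callables instead of connecting Qt slots, which removes the pyside2 dependency from the engine side.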

Persistent process socket server closing connection

This happens, and the user doesn't get any hint that something bad happened. The tool should fail or something...

c:\Workspace\Spine\spinetoolbox>python spinetoolbox.py
Exception in thread Thread-10:
Traceback (most recent call last):
  File "C:\Python\Python38\lib\threading.py", line 932, in _bootstrap_inner
    self.run()
  File "C:\Python\Python38\lib\threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "c:\workspace\spine\spinetoolbox\src\spine-engine\spine_engine\spine_engine.py", line 545, in _execute_item_filtered
    item_finish_state = item.execute(filtered_forward_resources, filtered_backward_resources)
  File "c:\workspace\spine\spinetoolbox\src\spine-items\spine_items\tool\executable_item.py", line 480, in execute
    return_code = self._tool_instance.execute()
  File "c:\workspace\spine\spinetoolbox\src\spine-items\spine_items\tool\tool_instance.py", line 187, in execute
    ret = self.exec_mngr.run_until_complete()
  File "c:\workspace\spine\spinetoolbox\src\spine-engine\spine_engine\execution_managers\persistent_execution_manager.py", line 726, in run_until_complete
    for msg in self._persistent_manager.issue_command(cmd):
  File "c:\workspace\spine\spinetoolbox\src\spine-engine\spine_engine\execution_managers\persistent_execution_manager.py", line 178, in issue_command
    cmd = self.make_complete_command(cmd)
  File "c:\workspace\spine\spinetoolbox\src\spine-engine\spine_engine\execution_managers\persistent_execution_manager.py", line 151, in make_complete_command
    result = self._communicate("is_complete", cmd)
  File "c:\workspace\spine\spinetoolbox\src\spine-engine\spine_engine\execution_managers\persistent_execution_manager.py", line 300, in _communicate
    response = s.recv(1000000)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host

Support for ExecutionItems - [merged]

In GitLab by @soininen on Apr 6, 2020, 15:17

Merges execution_items -> master

This MR provides support for the ExecutionItem approach in Toolbox, see toolbox!48. This should be merged in concert with that MR, otherwise Toolbox breaks.

Engine server

In GitLab by @manuelma on Nov 7, 2020, 14:03

Cousin of spine-tools/Spine-Toolbox#836

The first task is to set up a naive local engine server that just communicates through sockets.

Once that works, we can start thinking about the remote engine which adds a couple of challenges:

  1. Share files back and forth between the local and the remote, especially when those files are outside the project folder.
  2. Start a jupyter kernel in the remote engine and connect to it from the local toolbox.

Allow execution of selected items - [merged]

In GitLab by @soininen on Jan 2, 2020, 10:04

Merges issue_5_selected_execution -> master

This MR adds the ability to execute selected items only in a DAG.

Resolves #5

This is an API breaking change and should be merged at the same time as the corresponding toolbox!40.

v0.5.0 test fails

In GitLab by @fabianoP on Apr 9, 2020, 08:27

Hi, I just noticed that the tests on the recently changed version are failing.

Execute selected items only [REPLACEMENT ISSUE]

The original issue

Id: 5
Title: Execute selected items only

could not be created.
This is a dummy issue replacing the original one. It contains everything but the original issue description. If the GitLab repository still exists, visit the following link to view the original issue:

TODO

Executable items should recreate item data directories

Item data directories are automatically created when project items are loaded in Toolbox. Execution items, on the other hand, expect the directories to exist already. This may not be the case when Toolbox is running in headless mode, as the Toolbox-side project items are never constructed, making execution fail. Therefore, execution items should create the data directories as needed.
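The fix could be as simple as creating the directory idempotently at execution time; a sketch with a hypothetical helper name:

```python
import os

def ensure_data_dir(data_dir: str) -> None:
    # exist_ok=True makes this safe whether or not Toolbox already
    # created the directory (in headless mode it has not).
    os.makedirs(data_dir, exist_ok=True)
```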

Keep execution processes alive

When executing Python or Julia tools on a Jupyter kernel, we keep the kernel alive between engine runs.

For some reason, though, we don't do the same for subprocess execution. We kill the process each time, whereas we could absolutely keep the processes alive and interact with them using Popen.stdin, Popen.stdout, etc. This would preserve compilation work in Julia and could be a huge win for parallel filter execution (scenarios).

So I propose we refactor the StandardExecutionManager class a bit so that it follows more or less the same protocol as the KernelExecutionManager: keep processes alive, and only shut them down when the Python or Julia executable changes in the settings, or when Toolbox is closed.
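A rough sketch of the idea using Popen pipes. A real execution manager would need line-wise non-blocking reads and error handling; this only shows the keep-alive interaction pattern:

```python
import subprocess
import sys

# Keep an interpreter process alive and feed it commands through stdin
# instead of spawning a new process per execution.
proc = subprocess.Popen(
    [sys.executable, "-i", "-q"],     # interactive interpreter, stays alive
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    text=True,
)
# Issue a command; further commands could follow in later engine runs,
# preserving any state (or, for Julia, compilation work) in between.
proc.stdin.write("x = 21 * 2\nprint(x)\n")
proc.stdin.flush()
# Shut down only when settings change or the application closes:
out, _ = proc.communicate()
print(out)
```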

node_successors is not really needed

The connections argument to SpineEngine.__init__() makes node_successors redundant, since we can compute the successors from connections. We could drop node_successors from __init__() to make the call signature simpler.
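Computing the successors from connections is straightforward, assuming each connection can be read as a (source, destination) pair of item names (the real Connection class carries more data):

```python
from collections import defaultdict

def successors_from_connections(connections):
    # Assumes (source, destination) name pairs; illustrative only.
    successors = defaultdict(list)
    for source, destination in connections:
        successors[source].append(destination)
    return dict(successors)

connections = [("Data Store", "Tool"), ("Tool", "Exporter")]
print(successors_from_connections(connections))
```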

Julia tools running scenarios in parallel need to compile the same code multiple times

In GitLab by @manuelma on Dec 5, 2020, 23:13

When we automatically parallelize execution of scenarios, each tool instance needs to run in a separate process or kernel, otherwise they can't run in parallel.

That's OK, but the problem is that Julia tools need to compile the code the first time it's run. So if we spawn multiple Julia sessions to run the same code, they will all be running the same compilation statements. There must be a way to compile only once into a system image, and then use that system image in all the other processes to save time.

Time stamp for execution filters in loops

When running a tool in a loop and writing results to a DB, each iteration of the loop writes to a different alternative. This is because the time stamp for the execution filter is generated at the moment of running the tool.

But that makes results difficult to explore: one ends up with one alternative per iteration of the loop, which is insane. I think the tool should write to the same alternative regardless of the iteration, because it's the same 'execution'. This can be easily solved by generating a single timestamp when starting the DAG execution and using it for all execution filters.

Re spine-tools/Spine-Database-API#152
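A sketch of the proposed fix: stamp once when DAG execution starts and reuse that stamp for every execution filter. Class and method names here are illustrative:

```python
import datetime

# The point is that the stamp is fixed once per DAG execution,
# not regenerated per tool run or loop iteration.
class DagExecution:
    def __init__(self):
        self.timestamp = datetime.datetime.now().isoformat(timespec="seconds")

    def execution_filter_stamp(self):
        return self.timestamp

run = DagExecution()
first = run.execution_filter_stamp()
second = run.execution_filter_stamp()   # same stamp on every iteration
print(first == second)
```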

Example of optimisation workflow [REPLACEMENT ISSUE]

The original issue

Id: 1
Title: Example of optimisation workflow

could not be created.
This is a dummy issue replacing the original one. It contains everything but the original issue description. If the GitLab repository still exists, visit the following link to view the original issue:

TODO

Support for dagster v0.9 and newer [REPLACEMENT ISSUE]

The original issue

Id: 9
Title: Support for dagster v0.9 and newer

could not be created.
This is a dummy issue replacing the original one. It contains everything but the original issue description. If the GitLab repository still exists, visit the following link to view the original issue:

TODO

Traceback when using dagster 0.10.1

In GitLab by @manuelma on Jan 27, 2021, 09:07

When using dagster 0.10.1, the following happens upon any execution:

Traceback (most recent call last):
  File "/home/manuelma/miniconda3/envs/spinetoolbox/lib/python3.7/threading.py", line 926, in _bootstrap_inner
    self.run()
  File "/home/manuelma/miniconda3/envs/spinetoolbox/lib/python3.7/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/home/manuelma/Codes/spine/engine/spine_engine/spine_engine.py", line 177, in run
    for event in execute_pipeline_iterator(self._pipeline, run_config=run_config):
  File "/home/manuelma/miniconda3/envs/spinetoolbox/lib/python3.7/site-packages/dagster/core/execution/api.py", line 793, in __iter__
    execution_plan=self.execution_plan, pipeline_context=self.pipeline_context,
  File "/home/manuelma/miniconda3/envs/spinetoolbox/lib/python3.7/site-packages/dagster/core/execution/api.py", line 717, in _pipeline_execution_iterator
    for event in pipeline_context.executor.execute(pipeline_context, execution_plan):
  File "/home/manuelma/Codes/spine/engine/spine_engine/multithread_executor/multithread.py", line 97, in execute
    event_or_none = next(step_iter)
  File "/home/manuelma/Codes/spine/engine/spine_engine/multithread_executor/multithread.py", line 157, in execute_step_in_thread
    step_key=step_key,
TypeError: engine_event() got an unexpected keyword argument 'step_key'

QueueLogger cannot be pickled for multiprocessing

Executing a project item which uses ReturningProcess fails with the following Traceback:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
  File "spine_engine\utils\queue_logger.py", line 128, in __getattr__
    if self._silent and name != "prompt":
  File "spine_engine\utils\queue_logger.py", line 128, in __getattr__
    if self._silent and name != "prompt":
  File "spine_engine\utils\queue_logger.py", line 128, in __getattr__
    if self._silent and name != "prompt":
  [Previous line repeated 993 more times]
RecursionError: maximum recursion depth exceeded
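The recursion comes from __getattr__ touching self._silent while _silent itself does not exist yet: during unpickling, attributes are looked up before __init__ has run, so the lookup of _silent re-enters __getattr__ forever. A sketch of the usual guard, using a simplified stand-in rather than the actual QueueLogger code:

```python
class SafeQueueLogger:
    # Simplified stand-in for QueueLogger; only the guard is the point.
    def __init__(self):
        self._silent = False

    def __getattr__(self, name):
        # During unpickling, __getattr__ runs before instance attributes
        # exist; touching self._silent here would recurse forever.
        # Refusing private names breaks the cycle.
        if name.startswith("_"):
            raise AttributeError(name)
        if self._silent and name != "prompt":
            return lambda *args, **kwargs: None
        return lambda *args, **kwargs: print(name, *args)

import pickle
restored = pickle.loads(pickle.dumps(SafeQueueLogger()))
print(restored._silent)
```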

Issue toolbox792 spine items as dependency - [merged]

In GitLab by @manuelma on Sep 28, 2020, 22:01

Merges issue_toolbox792_spine_items -> master

This branch starts using spine_items as a dependency.

This should be merged together with toolbox!59. Please refer to toolbox!59 for details about the decisions involved in this change.

To test, just create a new virtual environment and install spine_engine. It should install spine_items but not pyside2.

Cc @PekkaSavolainen @soininen @ererkka @RichieBrady

Loop execution sometimes runs questionable items

Consider the following dummy workflow with jumps:
[image: chunks]

In my mind, this should be 'chunked' as follows:

  1. s, d, and g
  2. a alone
  3. b and e, with jump from e to b
  4. h, with jump from h to a
  5. c, f, and i

However, at the moment it's chunked like this:

  1. s alone
  2. a alone
  3. d, b, and e, with jump from e to b
  4. g and h, with jump from h to a
  5. c, f, and i

The problems are chunks 3 and 4 in this last list, the ones with jumps. For example, 3 will execute d at each iteration of the loop. I don't want that, because d can be an importer that I just want to run once before the looping. Running it for each iteration would corrupt the state of my iteration.

UserWarning from dagster 0.12.8

Something we do in Spine Engine is probably causing this warning when executing a Python Tool spec in Spine Toolbox's Jupyter Console. This appeared after we updated the required dagster version to "dagster>=0.12, <0.12.9".

I'm on dagster v0.12.8

C:\Python38\lib\site-packages\dagster\core\execution\context\output.py:247: UserWarning: `OutputContext.get_run_scoped_output_identifier` is deprecated. Use `OutputContext.get_output_identifier` instead.
  warnings.warn(

Execution still works, so this is not critical.

PlanOrchestrationContext import error on dagster 0.11

In setup.py we have dagster>=0.11, <0.12.9, but with 0.11 I get the error below. I fixed it by installing 0.12.8. Not sure if I did something wrong or if we have an issue, @soininen?

======================================================================
ERROR: test_self_jump_succeeds (tests.test_SpineEngine.TestSpineEngine)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/manuelma/Codes/spine/engine/tests/test_SpineEngine.py", line 282, in test_self_jump_succeeds
    engine.run()
  File "/home/manuelma/Codes/spine/engine/spine_engine/spine_engine.py", line 248, in run
    self._chunked_run()
  File "/home/manuelma/Codes/spine/engine/spine_engine/spine_engine.py", line 273, in _chunked_run
    self._run_chunks_backward(dagster_instance, run_id, run_config)
  File "/home/manuelma/Codes/spine/engine/spine_engine/spine_engine.py", line 290, in _run_chunks_backward
    for event in reexecute_pipeline_iterator(
  File "/home/manuelma/miniconda3/envs/spinetoolbox/lib/python3.8/site-packages/dagster/core/execution/api.py", line 825, in __iter__
    yield from self.execution_context_manager.prepare_context()
  File "/home/manuelma/miniconda3/envs/spinetoolbox/lib/python3.8/site-packages/dagster/utils/__init__.py", line 417, in generate_setup_events
    obj = next(self.generator)
  File "/home/manuelma/miniconda3/envs/spinetoolbox/lib/python3.8/site-packages/dagster/core/execution/context_creation_pipeline.py", line 236, in event_generator
    execution_context = self.construct_context(
  File "/home/manuelma/miniconda3/envs/spinetoolbox/lib/python3.8/site-packages/dagster/core/execution/context_creation_pipeline.py", line 295, in construct_context
    create_executor(context_creation_data), Executor, "Must return an Executor"
  File "/home/manuelma/miniconda3/envs/spinetoolbox/lib/python3.8/site-packages/dagster/core/execution/context_creation_pipeline.py", line 406, in create_executor
    return context_creation_data.executor_def.executor_creation_fn(
  File "/home/manuelma/Codes/spine/engine/spine_engine/multithread_executor/executor.py", line 50, in multithread_executor
    from spine_engine.multithread_executor.multithread import MultithreadExecutor
  File "/home/manuelma/Codes/spine/engine/spine_engine/multithread_executor/multithread.py", line 25, in <module>
    from dagster.core.execution.context.system import PlanOrchestrationContext
ImportError: cannot import name 'PlanOrchestrationContext' from 'dagster.core.execution.context.system' (/home/manuelma/miniconda3/envs/spinetoolbox/lib/python3.8/site-packages/dagster/core/execution/context/system.py)

Emit start/finished signals while executing nodes - [merged]

In GitLab by @soininen on Jan 23, 2020, 09:04

Merges start_stop_signals -> master

This MR adds dag_node_execution_started and dag_node_execution_finished signals to the Engine. These signals can be used e.g. to start and stop item icon animations in Toolbox, see toolbox!41.

Version number was bumped to 0.3.0.

Strategy to fully adopt the experimental engine

In GitLab by @manuelma on Nov 13, 2020, 12:20

I keep coming back to this, but I still think we need a timeline for retiring the regular engine. What do we need to do to trust the experimental engine and be happy with it? It's certainly a problem for developers to maintain two sets of executable items; it slows down development a lot, as you can imagine.

Thoughts? @PekkaSavolainen @ererkka @soininen @jkiviluo ?

Tests broken

Something about execute_unfiltered broke the tests; will fix them now. Sorry for the inconvenience.

Replace engine Qt signals

In GitLab by @RichieBrady on Aug 27, 2020, 17:03

Currently the engine has two signals that the toolbox connects to. They are

    dag_node_execution_started = Signal(str, "QVariant")
    """Emitted just before a named DAG node execution starts."""

    dag_node_execution_finished = Signal(str, "QVariant", "QVariant")
    """Emitted after a named DAG node has finished execution."""

From the toolbox they are connected to in the execute_dag method of spinetoolbox/project.py; below is a snippet from that method.

    self.engine = SpineEngine(items, node_successors, execution_permits)
    self.engine.dag_node_execution_finished.connect(self._handle_dag_node_execution_finished)
    self.dag_execution_about_to_start.emit(self.engine) # Not sure how to handle this 

This part is a little out of my depth (as it moves into UI territory):

    self.dag_execution_about_to_start.emit(self.engine)

A reference to the engine is passed to the UI to handle project item animations.

dag_execution_about_to_start is connected to in spinetoolbox/ui_main.py:

     def _connect_project_signals(self):
        """Connects signals emitted by project."""
        self._project.dag_execution_about_to_start.connect(self.ui.graphicsView.connect_engine_signals)
        self._project.project_execution_about_to_start.connect(self._scroll_event_log_to_end)

From here

     self._project.dag_execution_about_to_start.connect(self.ui.graphicsView.connect_engine_signals)

the engine reference is passed on, and its signals are connected, in spinetoolbox/widgets/custom_qgraphicsviews.py:

     @Slot("QVariant")
     def connect_engine_signals(self, engine):
        """Connects signals needed for icon animations from given engine."""
        engine.dag_node_execution_started.connect(self._start_animation)
        engine.dag_node_execution_finished.connect(self._stop_animation)
        engine.dag_node_execution_finished.connect(self._run_leave_animation)

I think this might be the more difficult aspect of replacing the engine signals and was hoping for some insight/advice on how to proceed.

Parallelism not exploited in some looped workflows

Consider the following dummy workflow:

[image: chunks]

At the moment, Spine Engine only starts the looping part of the workflow after Tool 1 and Tool 2 are done executing. But clearly it could run them in parallel, since there are no interdependencies. I think we need to be able to parallelize chunk executions where possible.

Logging capabilities for Spine Engine

In GitLab by @fabianoP on May 12, 2020, 09:59

The logging capabilities of Spine Engine need to be upgraded; it is difficult to debug an invalid workflow. We should look into the dagster API to output the log.

Support for items that do resource forwarding [REPLACEMENT ISSUE]

The original issue

Id: 11
Title: Support for items that do resource forwarding

could not be created.
This is a dummy issue, replacing the original one. It contains everything but the original issue description. In case the gitlab repository is still existing, visit the following link to show the original issue:

TODO

Funny scenario combinations from two similar dbs feeding a tool

I have two DBs with the same scenarios feeding a tool, say DB1 and DB2 with scenarioA and scenarioB. When I run the tool, I get executions:

  1. scenarioA from DB1, scenarioA from DB2
  2. scenarioA from DB1, scenarioB from DB2
  3. scenarioB from DB1, scenarioA from DB2
  4. scenarioB from DB1, scenarioB from DB2

The question is, do we ever want to execute combinations 2 and 3 above? I have a use case where the answer is definitely no (I don't want to mix scenarioA and scenarioB), and I'd need to do some scripting at the beginning of my tool to exclude the unwanted combinations. Maybe we could have a rule at Toolbox level that the same scenario name from multiple DBs always goes into the same execution?
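The proposed rule could be expressed as filtering the product() of per-DB scenarios down to combinations where every DB contributes the same scenario name; a toy sketch:

```python
from itertools import product

db1_scenarios = ["scenarioA", "scenarioB"]
db2_scenarios = ["scenarioA", "scenarioB"]

all_combinations = list(product(db1_scenarios, db2_scenarios))
# Rule: keep only combinations with a single scenario name across DBs.
matched = [combo for combo in all_combinations if len(set(combo)) == 1]
print(matched)
```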

Traceback when stopping a kernel execution

Almost systematically

Exception in thread Thread-20:
Traceback (most recent call last):
  File "/home/manuelma/miniconda3/envs/spinetoolbox/lib/python3.7/threading.py", line 926, in _bootstrap_inner
    self.run()
  File "/home/manuelma/miniconda3/envs/spinetoolbox/lib/python3.7/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/home/manuelma/Codes/spine/engine/spine_engine/spine_engine.py", line 390, in _execute_item_filtered
    item_success = item.execute(filtered_forward_resources, filtered_backward_resources)
  File "/home/manuelma/Codes/spine/items/spine_items/tool/executable_item.py", line 410, in execute
    return_code = self._tool_instance.execute()
  File "/home/manuelma/Codes/spine/items/spine_items/tool/tool_instance.py", line 157, in execute
    return self._console_execute()
  File "/home/manuelma/Codes/spine/items/spine_items/tool/tool_instance.py", line 167, in _console_execute
    ret = self.exec_mngr.run_until_complete()
  File "/home/manuelma/Codes/spine/engine/spine_engine/execution_managers.py", line 177, in run_until_complete
    returncode = self._do_run()
  File "/home/manuelma/Codes/spine/engine/spine_engine/execution_managers.py", line 183, in _do_run
    self._kernel_client.wait_for_ready(timeout=self._startup_timeout)
  File "/home/manuelma/miniconda3/envs/spinetoolbox/lib/python3.7/site-packages/jupyter_client/blocking/client.py", line 96, in wait_for_ready
    msg = self.shell_channel.get_msg(block=True, timeout=1)
  File "/home/manuelma/miniconda3/envs/spinetoolbox/lib/python3.7/site-packages/jupyter_client/blocking/channels.py", line 47, in get_msg
    ready = self.socket.poll(timeout)
  File "/home/manuelma/.local/lib/python3.7/site-packages/zmq/sugar/socket.py", line 702, in poll
    evts = dict(p.poll(timeout))
  File "/home/manuelma/.local/lib/python3.7/site-packages/zmq/sugar/poll.py", line 99, in poll
    return zmq_poll(self.sockets, timeout=timeout)
  File "zmq/backend/cython/_poll.pyx", line 143, in zmq.backend.cython._poll.zmq_poll
  File "zmq/backend/cython/_poll.pyx", line 123, in zmq.backend.cython._poll.zmq_poll
  File "zmq/backend/cython/checkrc.pxd", line 26, in zmq.backend.cython.checkrc._check_rc
zmq.error.ZMQError: Socket operation on non-socket

Drop 'node_successors' argument from SpineEngine.__init__()

The node_successors argument of SpineEngine.__init__() is superfluous because the same information is contained in the connections argument. We should drop node_successors: it would make calling __init__() more straightforward and remove a potential source of errors when node_successors and connections are inconsistent.

Setting resource limits

Discussed at Virtual 2021 meeting.

It should be possible to limit the number of cores (or hardware threads) Spine Engine uses, perhaps even per DAG branch. The possibility of limiting memory usage by utilizing user-provided memory usage estimates should be investigated as well.
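One way to cap concurrency, sketched with a semaphore sized to a user-configured core limit. This is illustrative only, not an actual engine mechanism:

```python
import threading

# A semaphore caps how many items execute at the same time.
core_limit = 4
slots = threading.BoundedSemaphore(core_limit)

def execute_with_limit(execute_item, *args):
    with slots:   # blocks while core_limit executions are in flight
        return execute_item(*args)

print(execute_with_limit(lambda x: x * 2, 21))
```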

Remove dependency to spine_items

Engine currently depends on the spine_items module even though it shouldn't. To drop the hard-coded name from Engine, we can pass the module's name to SpineEngine and load it dynamically using importlib.import_module. This makes Engine's unit tests independent of spine_items and also allows us to extend the item loader to load different item modules, in case users have third-party plug-ins in the future.
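The dynamic loading itself is a one-liner with importlib; a sketch with a hypothetical helper name:

```python
import importlib

def load_items_module(module_name):
    # The module name comes from the caller instead of being
    # hard-coded to "spine_items".
    return importlib.import_module(module_name)

# Any importable module works, which is what makes tests independent:
mod = load_items_module("math")
print(mod.sqrt(9))
```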
