See ELIXIR Cloud outline repository for details.
More info coming soon.
Proxy service for injecting middleware into GA4GH WES requests
License: Apache License 2.0
Worker logging is currently not very neat: several warnings are spread across multiple lines, and imported packages all use their own styles of logging, making the logs difficult to follow. See what can be done to improve logging such that it behaves similarly to the Flask app logging.
This may be a FOCA issue.
Validating incoming workflow run requests against the currently supported workflow types/versions involves a call to the GET /service-info endpoint to get the latest info on supported types/versions. However, the way this is currently implemented bears the risk of circular imports.
In particular, it would be nice to get rid of the dedicated pro_wes.ga4gh.wes.states module in favor of the State enum in pro_wes.ga4gh.wes.models, but this currently causes a circular import. Attempt to refactor such that no circular import arises.
As described in the WES API specification, workflow and task logs are not to be included in the run/task logs directly, but instead they should be hosted at separate locations. While proWES could link to those locations on the WES that actually processes the workflow run, data life cycles across different external WES instances will likely be different, leading to a potentially frustrating user experience.
Update used API specification from ga4gh/workflow-execution-service-schemas@4048014 to ga4gh/workflow-execution-service-schemas@13f5682, which is tagged 1.0.1
and constitutes the corresponding release.
Note that this is a silent change, as there are no actual content changes between the two commits.
The application reloads several times during startup. In the first load, Connexion fails to add operations for all routes. In subsequent loads, operations are added just fine and the app starts up. Possibly a FOCA issue.
Example log for a single operation:
Failed to add operation for GET /ga4gh/wes/v1/runs [connexion.apis.abstract]
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/connexion/resolver.py", line 66, in resolve_function_from_operation_id
return self.function_resolver(operation_id)
File "/usr/local/lib/python3.7/site-packages/connexion/utils.py", line 125, in get_function_from_name
function = deep_getattr(module, attr_path)
File "/usr/local/lib/python3.7/site-packages/connexion/utils.py", line 73, in deep_getattr
return functools.reduce(getattr, attrs, obj)
AttributeError: module 'pro_wes.ga4gh.wes.controllers' has no attribute 'ListRuns'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/connexion/apis/abstract.py", line 222, in add_paths
self.add_operation(path, method)
File "/usr/local/lib/python3.7/site-packages/connexion/apis/abstract.py", line 186, in add_operation
pass_context_arg_name=self.pass_context_arg_name
File "/usr/local/lib/python3.7/site-packages/connexion/operations/__init__.py", line 16, in make_operation
return spec.operation_cls.from_spec(spec, *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/connexion/operations/openapi.py", line 142, in from_spec
**kwargs
File "/usr/local/lib/python3.7/site-packages/connexion/operations/openapi.py", line 93, in __init__
pass_context_arg_name=pass_context_arg_name
File "/usr/local/lib/python3.7/site-packages/connexion/operations/abstract.py", line 101, in __init__
self._resolution = resolver.resolve(self)
File "/usr/local/lib/python3.7/site-packages/connexion/resolver.py", line 45, in resolve
return Resolution(self.resolve_function_from_operation_id(operation_id), operation_id)
File "/usr/local/lib/python3.7/site-packages/connexion/resolver.py", line 71, in resolve_function_from_operation_id
raise ResolverError(str(e), sys.exc_info())
connexion.exceptions.ResolverError: <ResolverError: module 'pro_wes.ga4gh.wes.controllers' has no attribute 'ListRuns'>
After deploying via docker-compose, when the mounted data directory (which includes a subdirectory for the MongoDB database) is manually deleted, the application throws an error on first startup. When retrying startup, the application starts up as expected.
With #78, the service info needs to add [support for the generic GA4GH Service Info API fields](https://github.com/ga4gh/workflow-execution-service-schemas/blob/13f56821fb50e4b01a269354421c96c20785e909/openapi/workflow_execution_service.openapi.yaml#L427-L477):
components:
  schemas:
    ...
    ServiceInfo:
      title: ServiceInfo
      allOf:
        - $ref: 'https://raw.githubusercontent.com/ga4gh-discovery/ga4gh-service-info/v1.0.0/service-info.yaml#/components/schemas/Service'
        - type: object
          properties:
            workflow_type_versions:
              type: object
              additionalProperties:
                $ref: '#/components/schemas/WorkflowTypeVersion'
    ...
Currently, GET and POST requests to the /service-info endpoint only account for the WES-specific fields. Add support for both the generic Service Info API fields and the WES-specific fields.
Apart from the app loading problems described in #33 and #41, registering/preparing the Celery app currently does not seem to be done elegantly. It is unclear whether the Flask app needs to be fully initialized before connecting with Celery and, in fact, this may actually cause problems (potentially #34).
Refactor Celery registration such that only code is executed that is actually required by Celery. This may be better implemented in FOCA (or both in FOCA and proWES).
Ensure that type hints are available for all function/method signatures and global/local variables, use mypy to correct typing problems and include type checking in the CI workflow.
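One possible shape for the type-checking step, sketched for GitHub Actions (the job name, action versions, and Python version are assumptions, not the repository's actual workflow):

```yaml
jobs:
  type-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      # install the package so mypy can follow imports
      - run: pip install mypy .
      - run: mypy pro_wes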
Describe the bug
POST /runs throws a 400 error.
Request:
curl \
-X POST \
"http://localhost:8090/ga4gh/wes/v1/runs" \
-H "accept: application/json" \
-H "Content-Type: multipart/form-data" \
-F "tags=" \
-F "workflow_attachment=" \
-F "workflow_engine_parameters=" \
-F 'workflow_params={"input":{"class":"File","path":"https://is.muni.cz/pics/design/r/znak_MU.png"}}' \
-F "workflow_type=CWL" \
-F "workflow_type_version=v1.0" \
-F "workflow_url=https://github.com/uniqueg/cwl-example-workflows/blob/master/hashsplitter-workflow.cwl"
Response:
{
  "message": "No suitable workflow engine known.",
  "code": "400"
}
Traceback:
[2024-02-04 17:49:09,277: ERROR ] {'message': 'No suitable workflow engine known.', 'code': '400'} [foca.errors.exceptions]
[2024-02-04 17:49:09,281: ERROR ] Traceback (most recent call last):\n File "/usr/local/lib/python3.10/site-packages/flask/app.py", line 1823, in full_dispatch_request\n rv = self.dispatch_request()\n File "/usr/local/lib/python3.10/site-packages/flask/app.py", line 1799, in dispatch_request\n return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)\n File "/usr/local/lib/python3.10/site-packages/connexion/decorators/decorator.py", line 68, in wrapper\n response = function(request)\n File "/usr/local/lib/python3.10/site-packages/connexion/decorators/uri_parsing.py", line 149, in wrapper\n response = function(request)\n File "/usr/local/lib/python3.10/site-packages/connexion/decorators/validation.py", line 196, in wrapper\n response = function(request)\n File "/usr/local/lib/python3.10/site-packages/connexion/decorators/parameter.py", line 120, in wrapper\n return function(**kwargs)\n File "/usr/local/lib/python3.10/site-packages/foca/utils/logging.py", line 61, in _wrapper\n response = fn(*args, **kwargs)\n File "/app/pro_wes/ga4gh/wes/controllers.py", line 64, in RunWorkflow\n response = workflow_run.run_workflow(\n File "/app/pro_wes/ga4gh/wes/workflow_runs.py", line 81, in run_workflow\n document.run_log.request = self._validate_run_request(\n File "/app/pro_wes/ga4gh/wes/workflow_runs.py", line 468, in _validate_run_request\n model_instance = RunRequest(**dict_atomic)\n File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__\n File "pydantic/main.py", line 1102, in pydantic.main.validate_model\n File "/app/pro_wes/ga4gh/wes/models.py", line 195, in workflow_type_and_version_supported\n raise NoSuitableEngine(\npro_wes.exceptions.NoSuitableEngine: 400 Bad Request: No suitable workflow engine known for workflow type 'CWL' and version 'v1.0'; supported workflow engines: {'additionalProp1': {'workflow_type_version': ['string']}, 'additionalProp2': {'workflow_type_version': ['string']}, 'additionalProp3': {'workflow_type_version': 
['string']}} [foca.errors.exceptions]
I don't think this is due to the latest commits; I have checked commits from 25f0e59 ("fix: create storage dir on startup (#83)") back to fe8271a ("build: upgrade FOCA (#64)"). This is odd, because it did use to work some time back; maybe it is a foca error.
To Reproduce
Steps to reproduce the behavior:
I ran the POST request from the Swagger UI on port 8090.
cwl-WES currently does not issue spec-compliant run logs (elixir-cloud-aai/cwl-WES/issues/128). Therefore, run log validation has currently been disabled. Once cwl-WES supports spec-compliant run logs, change the following relative to commit f4d7082:
pro_wes/client_wes.py:L218-228
pro_wes/config.yaml:L65-66
pro_wes/tasks/track_run_progress.py:L19&L93-95&L99-101&L141-143&L147-149
The storage directory (/data inside the container) is not available for Docker Compose-based deployments because the volume definition in the prowes service was inadvertently removed from docker-compose.yaml in fe8271a.
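A minimal sketch of the missing volume definition (the service name matches the description above; the host path is inferred from the reproduction steps elsewhere in this tracker and may differ from the actual docker-compose.yaml):

```yaml
services:
  prowes:
    # re-add the volume removed in fe8271a so that /data
    # (including the MongoDB subdirectory) persists on the host
    volumes:
      - ../data/pro_wes:/data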
Describe the bug
On sending a POST request with fields like workflow_engine and workflow_engine_version, the request is malformed.
Note: This happened when I tried to make a POST request to /runs with the above fields when they weren't filled or had values of "".
Add support for passing TRS URIs in the workflow_url field. Implement it such that files are pulled and then attached, in a spec-conforming way, to the workflow run requests sent to the remote services.
Note that this is a feature that is currently not explicitly supported by the WES specification.
Linked with #10
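As a sketch of the resolution step, a TRS URI of the form `trs://<host>/<tool_id>/<version>` could be mapped to the corresponding TRS v2 files endpoint before pulling attachments. The endpoint layout follows the GA4GH TRS v2 specification, but the exact URI form accepted in workflow_url is an assumption:

```python
from urllib.parse import quote, urlparse


def trs_uri_to_files_url(trs_uri: str, descriptor_type: str = "CWL") -> str:
    """Map a trs:// URI to the TRS v2 'files' endpoint URL.

    Assumes the form trs://<host>/<tool_id>/<version>; tool IDs and
    versions are percent-encoded, as TRS tool IDs may contain slashes.
    """
    parsed = urlparse(trs_uri)
    if parsed.scheme != "trs":
        raise ValueError(f"not a TRS URI: {trs_uri}")
    tool_id, version = parsed.path.lstrip("/").split("/", 1)
    return (
        f"https://{parsed.netloc}/ga4gh/trs/v2/tools/"
        f"{quote(tool_id, safe='')}/versions/{quote(version, safe='')}/"
        f"{descriptor_type}/files"
    )
```

The files listed by that endpoint could then be fetched and added as workflow_attachment parts of the outgoing run request.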
Replace the config handler with the foca package.
Use Black to reformat the entire code base (setup.py and the pro_wes and tests packages) and add a format check to the CI workflow.
While all FOCA configuration is automatically validated within FOCA, the same is currently not true for any custom configuration. Either validate the configuration in proWES (via pydantic models) or implement a mechanism that allows passing a model module to FOCA so that it can validate custom configuration as well (preferred).
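A minimal sketch of the first option, validating a custom config section with a pydantic model on startup (the field names are illustrative, not the actual proWES custom config schema):

```python
from pydantic import BaseModel, ValidationError


class ServiceInfoConfig(BaseModel):
    """Illustrative custom config section; the real schema differs."""

    id: str
    name: str
    contact_url: str = "mailto:[email protected]"


def validate_custom_config(raw: dict) -> ServiceInfoConfig:
    """Fail fast on startup if the custom config section is invalid."""
    try:
        return ServiceInfoConfig(**raw)
    except ValidationError as exc:
        raise SystemExit(f"invalid custom configuration: {exc}") from exc
```

The preferred FOCA-side mechanism would do the same thing, but with the model module supplied to FOCA alongside the config file.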
Describe the bug
While looking at the docker compose logs, I noticed the error below throughout.
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312203+00:00 [error] <0.1519.0> Error on AMQP connection <0.1519.0> (172.18.0.6:33766 -> 172.18.0.3:5672, vhost: '/', user: 'guest', state: running), channel 0:
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312203+00:00 [error] <0.1519.0> operation none caused a connection exception connection_forced: "broker forced connection closure with reason 'shutdown'"
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312226+00:00 [error] <0.1440.0> Error on AMQP connection <0.1440.0> (172.18.0.6:33694 -> 172.18.0.3:5672, vhost: '/', user: 'guest', state: running), channel 0:
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312226+00:00 [error] <0.1440.0> operation none caused a connection exception connection_forced: "broker forced connection closure with reason 'shutdown'"
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312253+00:00 [error] <0.1635.0> Error on AMQP connection <0.1635.0> (172.18.0.5:51722 -> 172.18.0.3:5672, vhost: '/', user: 'guest', state: running), channel 0:
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312253+00:00 [error] <0.1635.0> operation none caused a connection exception connection_forced: "broker forced connection closure with reason 'shutdown'"
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312182+00:00 [error] <0.1643.0> Error on AMQP connection <0.1643.0> (172.18.0.5:51732 -> 172.18.0.3:5672, vhost: '/', user: 'guest', state: running), channel 0:
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312182+00:00 [error] <0.1643.0> operation none caused a connection exception connection_forced: "broker forced connection closure with reason 'shutdown'"
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312238+00:00 [error] <0.1532.0> Error on AMQP connection <0.1532.0> (172.18.0.6:33792 -> 172.18.0.3:5672, vhost: '/', user: 'guest', state: running), channel 0:
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312238+00:00 [error] <0.1532.0> operation none caused a connection exception connection_forced: "broker forced connection closure with reason 'shutdown'"
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312209+00:00 [error] <0.1472.0> Error on AMQP connection <0.1472.0> (172.18.0.6:33738 -> 172.18.0.3:5672, vhost: '/', user: 'guest', state: running), channel 0:
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312209+00:00 [error] <0.1472.0> operation none caused a connection exception connection_forced: "broker forced connection closure with reason 'shutdown'"
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312128+00:00 [error] <0.1451.0> Error on AMQP connection <0.1451.0> (172.18.0.6:33706 -> 172.18.0.3:5672, vhost: '/', user: 'guest', state: running), channel 0:
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312128+00:00 [error] <0.1451.0> operation none caused a connection exception connection_forced: "broker forced connection closure with reason 'shutdown'"
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312263+00:00 [error] <0.1475.0> Error on AMQP connection <0.1475.0> (172.18.0.6:33750 -> 172.18.0.3:5672, vhost: '/', user: 'guest', state: running), channel 0:
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312263+00:00 [error] <0.1475.0> operation none caused a connection exception connection_forced: "broker forced connection closure with reason 'shutdown'"
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312308+00:00 [error] <0.1569.0> Error on AMQP connection <0.1569.0> (172.18.0.6:33822 -> 172.18.0.3:5672, vhost: '/', user: 'guest', state: running), channel 0:
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312308+00:00 [error] <0.1569.0> operation none caused a connection exception connection_forced: "broker forced connection closure with reason 'shutdown'"
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312287+00:00 [error] <0.1469.0> Error on AMQP connection <0.1469.0> (172.18.0.6:33722 -> 172.18.0.3:5672, vhost: '/', user: 'guest', state: running), channel 0:
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312287+00:00 [error] <0.1469.0> operation none caused a connection exception connection_forced: "broker forced connection closure with reason 'shutdown'"
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312311+00:00 [error] <0.1660.0> Error on AMQP connection <0.1660.0> (172.18.0.5:51740 -> 172.18.0.3:5672, vhost: '/', user: 'guest', state: running), channel 0:
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312311+00:00 [error] <0.1660.0> operation none caused a connection exception connection_forced: "broker forced connection closure with reason 'shutdown'"
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312314+00:00 [error] <0.1523.0> Error on AMQP connection <0.1523.0> (172.18.0.6:33780 -> 172.18.0.3:5672, vhost: '/', user: 'guest', state: running), channel 0:
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312314+00:00 [error] <0.1523.0> operation none caused a connection exception connection_forced: "broker forced connection closure with reason 'shutdown'"
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312308+00:00 [error] <0.1543.0> Error on AMQP connection <0.1543.0> (172.18.0.6:33796 -> 172.18.0.3:5672, vhost: '/', user: 'guest', state: running), channel 0:
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312308+00:00 [error] <0.1543.0> operation none caused a connection exception connection_forced: "broker forced connection closure with reason 'shutdown'"
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312350+00:00 [error] <0.1488.0> Error on AMQP connection <0.1488.0> (172.18.0.6:33762 -> 172.18.0.3:5672, vhost: '/', user: 'guest', state: running), channel 0:
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312350+00:00 [error] <0.1488.0> operation none caused a connection exception connection_forced: "broker forced connection closure with reason 'shutdown'"
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312295+00:00 [error] <0.1556.0> Error on AMQP connection <0.1556.0> (172.18.0.6:33806 -> 172.18.0.3:5672, vhost: '/', user: 'guest', state: running), channel 0:
prowes-rabbitmq-1 | 2024-01-10 11:09:53.312295+00:00 [error] <0.1556.0> operation none caused a connection exception connection_forced: "broker forced connection closure with reason 'shutdown'"
To Reproduce
Steps to reproduce the behavior:
Additional context
Below is my compose log.
dev.txt
Copy implementation from elixir-cloud-aai/proTES#160.
Describe the bug
The datatype mentioned in the docs is not the datatype being sent in the response; for example, tags, workflow_params, and workflow_engine_params are all meant to be objects but are returned as strings.
Provide unit tests that ensure complete (100%) test coverage.
Can be split up into multiple issues, e.g., for each subpackage.
Update deployment instructions to reflect current status and potential changes in #39.
Based on the deployment documentation, it appears that the Helm chart still has some problems. In particular, it may only run on OpenShift. Fix deployment such that it works on both OpenShift and vanilla Kubernetes.
On startup, the app is currently initialized several (currently three) times. It is unclear why that is, or if it's really necessary. Moreover, upon the first load, there are currently some issues (#33).
This may be a FOCA issue.
Access control is currently being implemented in FOCA. Once implemented, add and configure access control such that:
Use Pylint to remove code lints from the main code base (setup.py and the pro_wes package, but not tests) and include a lint check in the CI workflow.
GitHub Actions were temporarily ignored to speed up development/refactoring. Fix the tests and prevent merging if they are unsuccessful.
Ensure that proWES complies with WES release 1.1.0:
The Helm chart references the c03 OpenShift cluster at CSC. This cluster will be decommissioned soon.
The URL could be changed to Rahti, the main OpenShift cluster in CSC.
See security vulnerability here: GHSA-w3h3-4rj7-4ph4
Attaching files to a workflow run request is currently not possible because they are removed from the request before they are actually processed. For performance reasons, files are ideally written to disk only outside of the loop that tries to find a unique, available run identifier; before that, no working directory is available.
Check current state of implementation with respect to the work needed for integration between Krini (deployed at https://krini.rahtiapp.fi/) and available WES deployments.
Create issues for any subtasks and add to this project board.
Describe the bug
The next_page_token should be empty when the runs have been exhausted. While fetching runs, I set the pageSize to 10; only 7 runs were available, yet I still got a next_page_token, which led me to another fetch call that returned an empty list of runs.
To Reproduce
Steps to reproduce the behavior:
Keep fetching runs until they are exhausted.
Additional context
I encountered it while working on the WES component, where the Web Component was paginating to an empty list of runs. I have tried the local dev branch.
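The expected behavior can be sketched as a helper that only emits a token while more results remain (field names follow the WES list-runs response; the offset-as-token encoding is an assumption for illustration):

```python
def paginate(runs: list, page_size: int, offset: int = 0) -> dict:
    """Return one page of runs; next_page_token is empty once exhausted."""
    page = runs[offset : offset + page_size]
    next_offset = offset + page_size
    # only hand out a token if there are more runs beyond this page
    token = str(next_offset) if next_offset < len(runs) else ""
    return {"runs": page, "next_page_token": token}
```

With 7 runs and page_size=10, this returns all 7 runs and an empty next_page_token, avoiding the extra fetch described above.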
In the current implementation, failure to connect to a given remote WES can immediately lead to failed workflow runs. This is not great for production performance, because connections will frequently fail temporarily for numerous reasons.
Implement retry options for all remote connections and add corresponding configuration options.
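One way to sketch this with requests/urllib3 (the retry counts, backoff factor, and status list below are placeholder defaults that would come from the new configuration options):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


def build_session(total: int = 5, backoff_factor: float = 0.5) -> requests.Session:
    """Session that transparently retries transient failures when
    talking to remote WES instances."""
    retry = Retry(
        total=total,
        backoff_factor=backoff_factor,
        # retry on typical transient server-side statuses
        status_forcelist=(429, 500, 502, 503, 504),
        allowed_methods=frozenset({"GET", "POST"}),
    )
    session = requests.Session()
    adapter = HTTPAdapter(max_retries=retry)
    session.mount("http://", adapter)
    session.mount("https://", adapter)
    return session
```

Note that retrying POST requests is only safe here because WES run submission failures before a run ID is issued leave no server-side state; otherwise POST should be excluded from allowed_methods.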
Controller functionality was tested only with authentication disabled. Switch on authentication, choose an authentication-enabled WES instance, and check whether it works as expected. If not, fix the propagation of tokens.
Describe the bug
A fresh deployment of the service from the default branch (dev) via docker-compose currently fails.
The specific error message is this:
prowes_1_a6c151d880c7 | Traceback (most recent call last):
prowes_1_a6c151d880c7 | File "/app/pro_wes/app.py", line 22, in init_app
prowes_1_a6c151d880c7 | service_info = service_info.get_service_info()
prowes_1_a6c151d880c7 | File "/app/pro_wes/ga4gh/wes/service_info.py", line 62, in get_service_info
prowes_1_a6c151d880c7 | raise NotFound
prowes_1_a6c151d880c7 | werkzeug.exceptions.NotFound: 404 Not Found: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
prowes_1_a6c151d880c7 |
prowes_1_a6c151d880c7 | During handling of the above exception, another exception occurred:
prowes_1_a6c151d880c7 |
prowes_1_a6c151d880c7 | Traceback (most recent call last):
prowes_1_a6c151d880c7 | File "/usr/local/lib/python3.7/site-packages/connexion/apis/abstract.py", line 222, in add_paths
prowes_1_a6c151d880c7 | self.add_operation(path, method)
prowes_1_a6c151d880c7 | File "/usr/local/lib/python3.7/site-packages/connexion/apis/abstract.py", line 186, in add_operation
prowes_1_a6c151d880c7 | pass_context_arg_name=self.pass_context_arg_name
prowes_1_a6c151d880c7 | File "/usr/local/lib/python3.7/site-packages/connexion/operations/__init__.py", line 16, in make_operation
prowes_1_a6c151d880c7 | return spec.operation_cls.from_spec(spec, *args, **kwargs)
prowes_1_a6c151d880c7 | File "/usr/local/lib/python3.7/site-packages/connexion/operations/openapi.py", line 142, in from_spec
prowes_1_a6c151d880c7 | **kwargs
prowes_1_a6c151d880c7 | File "/usr/local/lib/python3.7/site-packages/connexion/operations/openapi.py", line 93, in __init__
prowes_1_a6c151d880c7 | pass_context_arg_name=pass_context_arg_name
prowes_1_a6c151d880c7 | File "/usr/local/lib/python3.7/site-packages/connexion/operations/abstract.py", line 101, in __init__
prowes_1_a6c151d880c7 | self._resolution = resolver.resolve(self)
prowes_1_a6c151d880c7 | File "/usr/local/lib/python3.7/site-packages/connexion/resolver.py", line 45, in resolve
prowes_1_a6c151d880c7 | return Resolution(self.resolve_function_from_operation_id(operation_id), operation_id)
prowes_1_a6c151d880c7 | File "/usr/local/lib/python3.7/site-packages/connexion/resolver.py", line 66, in resolve_function_from_operation_id
prowes_1_a6c151d880c7 | return self.function_resolver(operation_id)
prowes_1_a6c151d880c7 | File "/usr/local/lib/python3.7/site-packages/connexion/utils.py", line 116, in get_function_from_name
prowes_1_a6c151d880c7 | module = importlib.import_module(module_name)
prowes_1_a6c151d880c7 | File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module
prowes_1_a6c151d880c7 | return _bootstrap._gcd_import(name[level:], package, level)
prowes_1_a6c151d880c7 | File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
prowes_1_a6c151d880c7 | File "<frozen importlib._bootstrap>", line 983, in _find_and_load
prowes_1_a6c151d880c7 | File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
prowes_1_a6c151d880c7 | File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
prowes_1_a6c151d880c7 | File "<frozen importlib._bootstrap_external>", line 728, in exec_module
prowes_1_a6c151d880c7 | File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
prowes_1_a6c151d880c7 | File "/app/pro_wes/ga4gh/wes/controllers.py", line 12, in <module>
prowes_1_a6c151d880c7 | from pro_wes.ga4gh.wes.workflow_runs import WorkflowRuns
prowes_1_a6c151d880c7 | File "/app/pro_wes/ga4gh/wes/workflow_runs.py", line 45, in <module>
prowes_1_a6c151d880c7 | from pro_wes.tasks.track_run_progress import task__track_run_progress
prowes_1_a6c151d880c7 | File "/app/pro_wes/tasks/track_run_progress.py", line 26, in <module>
prowes_1_a6c151d880c7 | from pro_wes.worker import celery
prowes_1_a6c151d880c7 | File "/app/pro_wes/worker.py", line 7, in <module>
prowes_1_a6c151d880c7 | flask_app = init_app().app
prowes_1_a6c151d880c7 | File "/app/pro_wes/app.py", line 25, in init_app
prowes_1_a6c151d880c7 | data=current_app.config.foca.custom['service_info']
prowes_1_a6c151d880c7 | TypeError: 'CustomConfig' object is not subscriptable
This may be the same issue reported in #32, and possibly related to #33.
However, simply "restarting" (as suggested in #32) does not work (anymore). Rather, what recovers the service is checking out the controllers branch, restarting (docker-compose up -d --build) from that branch, then checking out the default branch again, and restarting (same command as before) from that branch.
To Reproduce
Clone the repository:
git clone [email protected]:elixir-cloud-aai/proWES.git
cd proWES
Delete any pre-existing database (may require sudo):
rm -rf ../data/pro_wes/
Start service:
docker-compose up -d --build
Swagger UI should be inaccessible at http://localhost:8090/ga4gh/wes/v1/ui/ and logs should show an error similar to the one above (plus errors as described in #33).
To recover:
git checkout controllers # may work with other branches as well
docker-compose up -d --build
git checkout dev
docker-compose up -d --build
The service should now be accessible and functional.
Expected behavior
The service should start up in a functional state.
Update used API specification from ga4gh/workflow-execution-service-schemas@13f5682 to ga4gh/workflow-execution-service-schemas@226f43b, which is tagged 1.1.0
and constitutes the corresponding release.
Implement a generic middleware class, then subclass it and implement middleware that matches incoming workflow run requests by their workflow type and version to WES instances that are able to process them.
Depends on #71.
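A sketch of what the class hierarchy could look like (the class names, registry shape, and request fields are assumptions, not the eventual proWES API):

```python
from abc import ABC, abstractmethod


class AbstractMiddleware(ABC):
    """Generic middleware: transforms an incoming run request."""

    @abstractmethod
    def apply(self, request: dict) -> dict:
        """Return the (possibly modified) request."""


class WorkflowTypeRouter(AbstractMiddleware):
    """Route a run request to a WES instance that supports its
    workflow type and version."""

    def __init__(self, registry: dict):
        # registry maps (workflow_type, workflow_type_version) -> WES URL
        self.registry = registry

    def apply(self, request: dict) -> dict:
        key = (request["workflow_type"], request["workflow_type_version"])
        try:
            request["wes_endpoint"] = self.registry[key]
        except KeyError:
            raise ValueError(f"no known WES supports {key[0]} {key[1]}") from None
        return request
```

Further middleware (e.g., load-based or cost-based routing) would then be additional subclasses applied in sequence.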
Describe the bug
The /runs/:id/cancel endpoint is not working as expected. It throws a CORS error.
Response:
{
"message": "Could not reach remote WES service.",
"code": "500"
}
Traceback:
[2024-02-04 18:07:00,829: ERROR ] {'message': 'Could not reach remote WES service.', 'code': '500'} [foca.errors.exceptions]
[2024-02-04 18:07:00,835: ERROR ] Traceback (most recent call last):\n File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 467, in _make_request\n self._validate_conn(conn)\n File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1092, in _validate_conn\n conn.connect()\n File "/usr/local/lib/python3.10/site-packages/urllib3/connection.py", line 642, in connect\n sock_and_verified = _ssl_wrap_socket_and_match_hostname(\n File "/usr/local/lib/python3.10/site-packages/urllib3/connection.py", line 783, in _ssl_wrap_socket_and_match_hostname\n ssl_sock = ssl_wrap_socket(\n File "/usr/local/lib/python3.10/site-packages/urllib3/util/ssl_.py", line 469, in ssl_wrap_socket\n ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls, server_hostname)\n File "/usr/local/lib/python3.10/site-packages/urllib3/util/ssl_.py", line 513, in _ssl_wrap_socket_impl\n return ssl_context.wrap_socket(sock, server_hostname=server_hostname)\n File "/usr/local/lib/python3.10/ssl.py", line 513, in wrap_socket\n return self.sslsocket_class._create(\n File "/usr/local/lib/python3.10/ssl.py", line 1071, in _create\n self.do_handshake()\n File "/usr/local/lib/python3.10/ssl.py", line 1342, in do_handshake\n self._sslobj.do_handshake()\nssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)\n\nDuring handling of the above exception, another exception occurred:\nTraceback (most recent call last):\n File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 790, in urlopen\n response = self._make_request(\n File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 491, in _make_request\n raise new_e\nurllib3.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)\n\nThe above exception was the direct cause of the following exception:\nTraceback 
(most recent call last):\n File "/usr/local/lib/python3.10/site-packages/requests/adapters.py", line 486, in send\n resp = conn.urlopen(\n File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 844, in urlopen\n retries = retries.increment(\n File "/usr/local/lib/python3.10/site-packages/urllib3/util/retry.py", line 515, in increment\n raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='csc-wes-noauth.rahtiapp.fi', port=443): Max retries exceeded with url: /ga4gh/wes/v1/runs/VVXHO8/cancel (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)')))\n\nDuring handling of the above exception, another exception occurred:\nTraceback (most recent call last):\n File "/app/pro_wes/ga4gh/wes/client_wes.py", line 276, in cancel_run\n response_unvalidated = self.session.post(url, **kwargs).json()\n File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 637, in post\n return self.request("POST", url, data=data, json=json, **kwargs)\n File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 589, in request\n resp = self.send(prep, **send_kwargs)\n File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 703, in send\n r = adapter.send(request, **kwargs)\n File "/usr/local/lib/python3.10/site-packages/requests/adapters.py", line 517, in send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: HTTPSConnectionPool(host='csc-wes-noauth.rahtiapp.fi', port=443): Max retries exceeded with url: /ga4gh/wes/v1/runs/VVXHO8/cancel (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)')))\n\nThe above exception was the direct cause of the following exception:\nTraceback (most recent call last):\n File 
"/usr/local/lib/python3.10/site-packages/flask/app.py", line 1823, in full_dispatch_request\n rv = self.dispatch_request()\n File "/usr/local/lib/python3.10/site-packages/flask/app.py", line 1799, in dispatch_request\n return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)\n File "/usr/local/lib/python3.10/site-packages/connexion/decorators/decorator.py", line 68, in wrapper\n response = function(request)\n File "/usr/local/lib/python3.10/site-packages/connexion/decorators/uri_parsing.py", line 149, in wrapper\n response = function(request)\n File "/usr/local/lib/python3.10/site-packages/connexion/decorators/validation.py", line 399, in wrapper\n return function(request)\n File "/usr/local/lib/python3.10/site-packages/connexion/decorators/parameter.py", line 120, in wrapper\n return function(**kwargs)\n File "/usr/local/lib/python3.10/site-packages/foca/utils/logging.py", line 61, in _wrapper\n response = fn(*args, **kwargs)\n File "/app/pro_wes/ga4gh/wes/controllers.py", line 164, in CancelRun\n response = workflow_run.cancel_run(run_id=run_id, **kwargs)\n File "/app/pro_wes/ga4gh/wes/workflow_runs.py", line 383, in cancel_run\n wes_client.cancel_run(run_id=document["wes_endpoint"]["run_id"])\n File "/app/pro_wes/ga4gh/wes/client_wes.py", line 278, in cancel_run\n raise EngineUnavailable("external workflow engine unavailable") from exc\npro_wes.exceptions.EngineUnavailable: 500 Internal Server Error: external workflow engine unavailable [foca.errors.exceptions]
To Reproduce
/runs endpoint.
A lot of the code shared between this service and others in the organization has been moved to the FOCA archetype.
Duplicate code should be removed from this service and, where necessary, the remaining code should be refactored to make use of FOCA.
Add support for passing URLs to GitHub/GitLab/Bitbucket repositories in the workflow_url field. Implement it such that repositories are cloned and files are then attached, in a spec-conforming way, to the workflow run requests sent to the remote services.
Note that this is a feature that is currently not explicitly supported by the WES specification.
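A sketch of the first step, splitting a GitHub blob URL into a clonable repository URL, a ref, and a file path (GitLab and Bitbucket use different path layouts and are not covered; this parsing is an assumption for illustration):

```python
from urllib.parse import urlparse


def parse_github_blob_url(url: str) -> tuple:
    """Split https://github.com/<owner>/<repo>/blob/<ref>/<path>
    into (clone_url, ref, path)."""
    parsed = urlparse(url)
    parts = parsed.path.strip("/").split("/")
    if parsed.netloc != "github.com" or len(parts) < 5 or parts[2] != "blob":
        raise ValueError(f"not a GitHub blob URL: {url}")
    owner, repo = parts[0], parts[1]
    ref = parts[3]
    path = "/".join(parts[4:])
    return f"https://github.com/{owner}/{repo}.git", ref, path
```

The clone URL and ref would then feed a shallow `git clone`, after which the repository's files are attached to the outgoing run request.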
When connecting to MongoDB from the worker, the following warning is issued, despite the client being freshly created (apparently):
"MongoClient opened before fork. Create MongoClient only "
/usr/local/lib/python3.7/site-packages/pymongo/topology.py:155: UserWarning: MongoClient opened before fork. Create MongoClient only after forking. See PyMongo's documentation for details: http://api.mongodb.org/python/current/faq.html#is-pymongo-fork-safe
This would likely be better implemented in FOCA.