cd docs
npm run docs:build
npm run docs:dev
A distributed process engine based on the BPMN 2.0 and FHIR R4 standards
Home Page: https://dsf.dev
License: Apache License 2.0
The DSF FHIR docker image should contain a default "external" bundle.xml file to create the local Organization and Endpoint resources. Installation specific properties (identifier and address) of the Endpoint resource should be defined via the existing DSF FHIR server base url (property dev.dsf.fhir.server.base.url) with address = url and identifier = fqdn from the url. The installation specific property identifier of the Organization resource should be defined via property dev.dsf.fhir.server.organization.identifier.
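Deriving the Endpoint identifier (fqdn) and address from the base url could be sketched as follows (a minimal sketch using java.net.URI; class and method names are illustrative, not the actual DSF implementation):

```java
import java.net.URI;

public class EndpointDefaults
{
	// address is the configured DSF FHIR server base url itself, e.g.
	// dev.dsf.fhir.server.base.url = https://foo.example.org/fhir
	public static String address(String baseUrl)
	{
		return baseUrl;
	}

	// identifier is the fully qualified domain name, i.e. the host part of the url
	public static String identifier(String baseUrl)
	{
		return URI.create(baseUrl).getHost();
	}

	public static void main(String[] args)
	{
		System.out.println(identifier("https://foo.example.org/fhir")); // foo.example.org
	}
}
```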
Jetty config properties should be aligned to the existing dev.dsf... properties.
Create an HTML view which allows starting new processes via Task resources with status draft.
The context path of the FHIR proxy server is currently hardcoded to /fhir. To make this customizable, only a few changes need to be performed:

Introduce an environment variable APP_SERVER_CONTEXT_PATH and default it to /fhir in dsf-docker/fhir_proxy/Dockerfile:
ENV PROXY_PASS_CONNECTION_TIMEOUT_WS=30
+ENV APP_SERVER_CONTEXT_PATH="/fhir"
Replace the hardcoded /fhir in dsf-docker/fhir_proxy/conf/extra/host-ssl.conf with that environment variable:
-<Location "/fhir">
+<Location "${APP_SERVER_CONTEXT_PATH}">
RequestHeader set X-ClientCert %{SSL_CLIENT_CERT}s
- ProxyPass http://${APP_SERVER_IP}:8080/fhir timeout=${PROXY_PASS_TIMEOUT_HTTP} connectiontimeout=${PROXY_PASS_CONNECTION_TIMEOUT_HTTP}
- ProxyPassReverse http://${APP_SERVER_IP}:8080/fhir
+ ProxyPass http://${APP_SERVER_IP}:8080${APP_SERVER_CONTEXT_PATH} timeout=${PROXY_PASS_TIMEOUT_HTTP} connectiontimeout=${PROXY_PASS_CONNECTION_TIMEOUT_HTTP}
+ ProxyPassReverse http://${APP_SERVER_IP}:8080${APP_SERVER_CONTEXT_PATH}
</Location>
-<Location "/fhir/ws">
+<Location "${APP_SERVER_CONTEXT_PATH}/ws">
RequestHeader set X-ClientCert %{SSL_CLIENT_CERT}s
ProxyWebsocketFallbackToProxyHttp off
- ProxyPass ws://${APP_SERVER_IP}:8080/fhir/ws timeout=${PROXY_PASS_TIMEOUT_WS} connectiontimeout=${PROXY_PASS_CONNECTION_TIMEOUT_WS}
- ProxyPassReverse ws://${APP_SERVER_IP}:8080/fhir/ws
+ ProxyPass ws://${APP_SERVER_IP}:8080${APP_SERVER_CONTEXT_PATH}/ws timeout=${PROXY_PASS_TIMEOUT_WS} connectiontimeout=${PROXY_PASS_CONNECTION_TIMEOUT_WS}
+ ProxyPassReverse ws://${APP_SERVER_IP}:8080${APP_SERVER_CONTEXT_PATH}/ws
</Location>
Set both variables in docker-compose.yml (DEV_DSF_SERVER_CONTEXT_PATH already exists in dsf-fhir/dsf-fhir-server-jetty/docker/Dockerfile):
services:
proxy:
image: ghcr.io/datasharingframework/fhir_proxy:1.1.0
...
environment:
...
+ APP_SERVER_CONTEXT_PATH: /dsf/fhir
...
app:
image: ghcr.io/datasharingframework/fhir:1.1.0
...
environment:
...
+ DEV_DSF_SERVER_CONTEXT_PATH: /dsf/fhir
...
The above has been implemented and successfully tested in our local environment already.
dsf-fhir/dsf-fhir-server/src/main/java/dev/dsf/fhir/adapter/HtmlFhirAdapter.java
in private String getUrlHeading(Resource resource), which leads to a double display of /fhir in the shown title, e.g. https://diz.uks.eu/dsf/fhir/fhir/metadata. Also - if I am not mistaken - there should be an html tab left of the json and xml tabs (see screenshot).
Dependencies between processes means that certain processes cannot be used independently of each other. This is no longer needed and can therefore be removed.
Simplify the DB migration scripts for FHIR and BPE with as few migration steps as possible.
In the future, we will deploy processes only using fat-jars. Therefore, the folder deployment option should be removed.
Current status:
Currently, the User-Agent header only contains information about the library used for HTTP requests.
Proposal:
I suggest that the User-Agent header be extended to include the name and version of the software (e.g. DSF BPE 1.0.0). It might even be possible to include process plugin and version for easier identification.
This would allow easier server-side debugging for external applications (e.g. an external server called in a process task).
It could also be used to return different data depending on the version number sent by the DSF BPE, but in this case the process plugin developer should proactively send the version number, e.g. in the request body or as a URL parameter.
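Setting such a User-Agent header with the JDK HTTP client could look like this (a sketch; the exact version string format, e.g. "DSF BPE 1.0.0", is an assumption, not a decided convention):

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class UserAgentExample
{
	// builds a request identifying software name and version
	// instead of only the HTTP library used
	public static HttpRequest requestWithUserAgent(String url, String software, String version)
	{
		return HttpRequest.newBuilder(URI.create(url))
				.header("User-Agent", software + " " + version)
				.GET().build();
	}

	public static void main(String[] args)
	{
		HttpRequest request = requestWithUserAgent("https://fhir.example.org/metadata", "DSF BPE", "1.0.0");
		System.out.println(request.headers().firstValue("User-Agent").orElseThrow()); // DSF BPE 1.0.0
	}
}
```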
FHIR URLs: http://highmed.org/... -> http://dsf.dev/...
Process URLs: http://highmed.org/... -> http://dsf.dev/...
Java packages: org.highmed.dsf... -> dev.dsf...
Add a common forward proxy server config with config parameters for: url, username, password and a no-proxy list. Config values should be accessible via the process plugin API.
The getOrganization(Identifier organizationIdentifier) method searches for Endpoint resources and thus can't find Organization resources by identifier.
Multiple ActivityDefinition resources should be allowed via the ActivityDefinitionAuthorizationRule for the same parent/member organization combination in order to configure different endpoints for different roles. Currently only one ActivityDefinition resource is allowed for the same combination of parent and member organization. The new rules should allow ActivityDefinition resources for a uniquely searchable combination of parent/member organization and role(s).
Start new development cycle 1.1.1
Start new development cycle.
The FHIR server role config dev.dsf.fhir.server.roleConfig should be optional and the validation of YAML properties can be improved.
In order to use OIDC authentication, client certificates need to be configured as optional in the reverse proxy. A new environment variable should be added to configure the SSLVerifyClient option.
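A possible shape for this (a sketch; the variable name SSL_VERIFY_CLIENT is hypothetical and would need a default of require to keep the current behavior):

```apache
# dsf-docker/fhir_proxy/Dockerfile (hypothetical variable name):
# ENV SSL_VERIFY_CLIENT="require"

# conf/extra/host-ssl.conf: "optional" would allow OIDC authenticated
# users to connect without presenting a client certificate
SSLVerifyClient ${SSL_VERIFY_CLIENT}
```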
Cleanup TaskHelper methods and add JavaDoc.
We should link to the installation instructions for the DSF in the readme:
To reduce system load (as suggested by @EmteZogaf) we should use curl as the FHIR/BPE healthcheck client instead of our Java based "StatusClient".
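A docker-compose healthcheck using curl could then look roughly like this (a sketch; the status endpoint path and port are assumptions):

```yaml
app:
  image: ghcr.io/datasharingframework/fhir:1.1.0
  healthcheck:
    # hypothetical internal status endpoint; --fail makes curl exit
    # non-zero on HTTP errors so docker marks the container unhealthy
    test: ["CMD", "curl", "--fail", "--silent", "http://localhost:8080/fhir/status"]
    interval: 30s
    timeout: 5s
    retries: 3
```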
In addition to the backend acting as an OIDC client authenticating users for the HTML frontend via Authorization Code Flow, we should add support for OAuth Bearer Token Authentication in order to support other clients interacting with the FHIR rest webservice directly.
The config property dev.dsf.fhir.server.base.url of the FHIR server is usually not necessary and should be made optional. By setting Host, X-Forwarded-For, X-Forwarded-Host, X-Forwarded-Proto and X-Real-IP headers in the reverse-proxy, the external URL is available to the backend server, for example via UriInfo.getBaseUri().
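In the Apache based reverse proxy this could be configured roughly as follows (a sketch using standard mod_proxy/mod_headers directives; not the final fhir_proxy configuration):

```apache
# pass the original Host header through to the backend
ProxyPreserveHost On

# scheme as seen by the client; X-Forwarded-For and X-Forwarded-Host
# are added automatically by mod_proxy
RequestHeader set X-Forwarded-Proto "https"
RequestHeader set X-Real-IP "expr=%{REMOTE_ADDR}"
```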
Calling OrganizationProvider.getOrganizations(Identifier parentOrganizationIdentifier, Coding memberOrganizationRole) in a DSF instance whose store contains deleted Organizations with OrganizationAffiliations that are not deleted and still have the status active leads to a ResultBundle that includes these OrganizationAffiliations but not the deleted Organizations.
The ResultBundle total number equals the number of OrganizationAffiliations found by the FHIR search query, not the number of Organizations included in the ResultBundle.
In line AbstractResourceProvider.java#L64 the total number of the ResultBundle is compared to the number of already found included (with search mode include) resources, and if this number is lower, the next page gets loaded. As the following pages contain no further results, these numbers never become equal and the page counter gets incremented infinitely.
A partial solution could be to check whether the ResultBundle contains a next link to the next result page. If there is no next link, break out of the while loop.
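The proposed next-link check could be sketched like this (a self-contained simulation; Page and its methods are stand-ins for the actual result bundle handling in AbstractResourceProvider, not DSF API):

```java
import java.util.List;

public class NextLinkPaging
{
	// stand-in for one page of a FHIR search result
	public record Page(List<String> includedResources, boolean hasNextLink) {}

	// counts included resources across pages, breaking when no next link
	// exists instead of comparing against the (possibly larger) total
	public static int countIncluded(List<Page> pages)
	{
		int found = 0, index = 0;
		while (true)
		{
			Page page = pages.get(index++);
			found += page.includedResources().size();

			// without a next link there are no further pages,
			// so stop instead of looping forever
			if (!page.hasNextLink())
				break;
		}
		return found;
	}

	public static void main(String[] args)
	{
		// affiliations reference 3 organizations, 1 of them deleted: total
		// would be 3, but only 2 organizations are ever included
		List<Page> pages = List.of(new Page(List.of("Org/1", "Org/2"), true), new Page(List.of(), false));
		System.out.println(countIncluded(pages)); // 2
	}
}
```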
Business-key optional if Task.status = {requested}, business-key mandatory if Task.status = {in-progress, completed, failed}.
The DSF FHIR server UI should have a dark mode.
In version 1.1.0 (FHIR and BPE) the environment variable DEV_DSF_PROXY_NOPROXY does not separate values when using the literal block scalar | without comma signs.
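For illustration, the two notations behave differently (host names are examples):

```yaml
# works: comma separated values
DEV_DSF_PROXY_NOPROXY: localhost,foo.example.org

# does not work: the literal block scalar passes the newline
# separated entries through as a single string without splitting
DEV_DSF_PROXY_NOPROXY: |
  localhost
  foo.example.org
```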
As the current GitHub hosted runners are low on RAM and CPU, we need to provide self hosted runners.

Allow creation of draft Task resources even if the Task would not be allowed to be executed.

In order to execute the Ping/Pong process with >10 targets, the camunda DefaultJobExecutor thread pool queue size needs to be increased. Since the ping process uses a multi-instance parallel sub-process with async before config, a job for every ping target is created.
The executor config options (core pool size, queue size and max pool size) should be exposed.
To understand Java ThreadPool config options: http://www.bigsoft.co.uk/blog/2009/11/27/rules-of-a-threadpoolexecutor-pool-size
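The interaction of the three options follows the standard ThreadPoolExecutor rules from the linked article: threads beyond the core pool size are only created once the queue is full. A minimal illustration (the values are examples, not proposed defaults):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ExecutorConfig
{
	// tasks queue up to queueSize entries before threads beyond
	// corePoolSize (up to maxPoolSize) are created
	public static ThreadPoolExecutor create(int corePoolSize, int maxPoolSize, int queueSize)
	{
		return new ThreadPoolExecutor(corePoolSize, maxPoolSize, 60, TimeUnit.SECONDS,
				new LinkedBlockingQueue<>(queueSize));
	}

	public static void main(String[] args)
	{
		ThreadPoolExecutor executor = create(3, 10, 100);
		System.out.println(executor.getCorePoolSize() + " / " + executor.getMaximumPoolSize()); // 3 / 10
		executor.shutdown();
	}
}
```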
The search parameter infrastructure should allow repeated search parameters to define "AND" queries. Currently only date search parameters can be defined once or twice.
The config parameter dev.dsf.bpe.fhir.server.organization.identifier.value of the DSF BPE is currently required but never used. The parameter can be removed since the BPE contacts the DSF FHIR server on startup to download the local Organization FHIR resource, thus "calculating" the value based on the configured DSF FHIR server base url.
Migrate to Java 17, upgrade libraries and tools where needed.
Remove the VM-based test-setup, because it hasn't been updated and used for quite some time.
The plugin folder should be removed from default deployments, as we will normally deploy processes as fat-jars.
Additionally, the plugin folder has led to confusion because we also talk about process-plugins, which refers to a different folder. Therefore we should rename the plugin folder as well, maybe to module or lib_external.
stopped or failed. Identified resources:
Using a NamingSystem in a ProcessPluginDefinition leads to the following error during process plugin deployment:
AbstractProcessPlugin.isValid(1122) | Ignoring FHIR resource fhir/NamingSystem/mii-project-identifier.xml from process plugin mii-process-data-transfer-1.0.0.0: NamingSystem.version empty
NamingSystems do not contain a version element.
Since the thread handling the websocket client on the BPE server is blocked during execution of non async process steps, the websocket Ping-Frame may not be answered in time, resulting in a connection idle timeout. Task resources sent after the idle timeout are currently never received by the BPE via the websocket connection.
The actual process engine process start or process correlation step needs to be handled by a different thread, with a queue between the websocket client thread and the BPE start/correlation thread.
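The decoupling could be sketched with a blocking queue between the websocket client thread and a separate start/correlation thread (a minimal sketch; class and method names are illustrative, not the actual BPE implementation):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class TaskHandoff
{
	private final BlockingQueue<String> receivedTasks = new LinkedBlockingQueue<>();

	// called from the websocket client thread: returns immediately so
	// Ping-Frames can still be answered while processes execute
	public void onTaskReceived(String taskId)
	{
		receivedTasks.add(taskId);
	}

	public int pending()
	{
		return receivedTasks.size();
	}

	// runs on a separate thread: blocks on the queue and performs the
	// (potentially long running) process start or correlation
	public void startOrCorrelateLoop() throws InterruptedException
	{
		while (!Thread.currentThread().isInterrupted())
		{
			String taskId = receivedTasks.take();
			// process engine start / correlation would happen here
			System.out.println("handling " + taskId);
		}
	}
}
```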
Upgrade dependencies where possible/compatible.
Remove modules that are not considered core modules of the DSF but are more process specific:
These modules could be released later as libraries, if needed.
The implementation should allow for information about the OpenID Provider to be configured via a OpenID Provider Configuration Request or manually via environment variables since the DSF FHIR server may not be able to access the OpenID Provider directly.
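When configured automatically, the server would fetch the provider metadata from the well-known endpoint defined by OpenID Connect Discovery; constructing that URL from the provider base url is straightforward (a sketch; the config value name is left open here):

```java
public class OidcDiscovery
{
	// builds the OpenID Provider Configuration Request URL from the
	// issuer / provider base url, per OpenID Connect Discovery 1.0
	public static String configurationUrl(String providerBaseUrl)
	{
		String base = providerBaseUrl.endsWith("/")
				? providerBaseUrl.substring(0, providerBaseUrl.length() - 1)
				: providerBaseUrl;
		return base + "/.well-known/openid-configuration";
	}

	public static void main(String[] args)
	{
		System.out.println(configurationUrl("https://keycloak.example.org/realms/dsf"));
		// https://keycloak.example.org/realms/dsf/.well-known/openid-configuration
	}
}
```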
\d+\.\d+
\d+\.\d+\.\d+\.\d+
Extending the DefaultUserTaskListener does not provide access to the ProcessPluginApi. beforeQuestionnaireResponseCreate() and afterQuestionnaireResponseCreate() are missing the Variables parameter.
As defined by http://hl7.org/fhir/R4/search.html#summary
Add entries to the CodeSystems class of the next process API version for the organization-role and practitioner-role CodeSystems.
Next version (for now): 1.1.0
Related to issue #90 we should not only enable execution of process starts and correlations on a separate special thread but use a pool of threads to enable parallel execution of non async process steps.
Currently, the organization roles do not perfectly align with the ones defined in official MII documents and are therefore substituted in MII processes by roles that do not fit perfectly.
Possible changes include adding "DMS" as a central organizational role in the MII and renaming "MeDIC" to "DIC" to align the wording with the official documents of the MII and so that all roles consist of three letters.
This leads to the following proposal for roles in DSF v1.0.0:
<concept>
<code value="COS"/>
<display value="Coordinating Site"/>
</concept>
<concept>
<code value="CRR"/>
<display value="Central Research Repository"/>
</concept>
<concept>
<code value="DIC"/>
<display value="Data Integration Center"/>
</concept>
<concept>
<code value="DMS"/>
<display value="Data Management Site"/>
</concept>
<concept>
<code value="DTS"/>
<display value="Data Transfer Site"/>
</concept>
<concept>
<code value="HRP"/>
<display value="Health Research Platform"/>
</concept>
<concept>
<code value="TTP"/>
<display value="Trusted Third Party"/>
</concept>