
Camunda 8 JavaScript SDK


This is the official Camunda 8 JavaScript SDK. It is written in TypeScript and runs on Node.js. See why this does not run in a web browser.

Full API Docs are here. See the QUICKSTART.md file in the repository for a quick start.

What does "supported" mean?

This is the official supported-by-Camunda Node.js SDK for Camunda Platform 8.

The Node.js SDK will not always support all features of Camunda Platform 8 immediately upon their release. Complete API coverage for a platform release will lag behind the platform release.

Prioritisation of feature implementation is influenced by customer demand.

Semantic versioning

The SDK package tracks Camunda Platform 8 minor versioning. Feature releases to support current Platform minor version features result in a patch release of the SDK.

Using the SDK in your project

Install the SDK as a dependency:

npm i @camunda8/sdk

Usage

In this release, the functionality of Camunda 8 is exposed via dedicated clients for the component APIs.

import { Camunda8 } from '@camunda8/sdk'

const c8 = new Camunda8()
const zeebe = c8.getZeebeGrpcApiClient()
const zeebeRest = c8.getZeebeRestClient()
const operate = c8.getOperateApiClient()
const optimize = c8.getOptimizeApiClient()
const tasklist = c8.getTasklistApiClient()
const modeler = c8.getModelerApiClient()
const admin = c8.getAdminApiClient()

Configuration

The SDK can be configured through any combination of environment variables and explicit configuration passed to the Camunda8 constructor.

Any configuration passed in to the Camunda8 constructor is merged over any configuration in the environment.

The configuration object fields and the environment variables have exactly the same names. See the file src/lib/Configuration.ts for a complete configuration outline.
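The merge behaviour can be sketched with a plain object spread — a minimal illustration of the semantics, not the SDK's actual implementation:

```typescript
// Illustrative sketch only: explicit constructor configuration wins over
// values read from the environment.
const fromEnvironment = {
	CAMUNDA_OAUTH_URL: 'http://env-configured-url',
	CAMUNDA_TENANT_ID: 'env-tenant',
}
const explicitConfig = {
	CAMUNDA_TENANT_ID: '', // an explicit empty string still overrides the environment
}
const effectiveConfig = { ...fromEnvironment, ...explicitConfig }
// effectiveConfig.CAMUNDA_TENANT_ID is '', CAMUNDA_OAUTH_URL comes from the environment
```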

A note on how int64 is handled in the JavaScript SDK

Entity keys in Camunda 8 are stored and represented as int64 numbers. The range of int64 extends to numbers that cannot be represented by the JavaScript number type. To deal with this, int64 keys are serialised by the SDK to the JavaScript string type. See this issue for more details.

Some number values - for example, "total returned results" - may be specified as int64 in the API specifications. Although these values will usually be well within the safe range, they are always serialised to string.

For int64 values whose type is not known ahead of time, such as job variables, you can pass an annotated data transfer object (DTO) to decode them reliably. If no DTO is specified, the default behaviour of the SDK is to serialise all numbers to the JavaScript number type, and to throw an exception if it detects a number value at runtime that cannot be accurately stored as number.
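The precision problem can be demonstrated with plain JavaScript:

```typescript
// Why int64 keys are serialised to string: values beyond Number.MAX_SAFE_INTEGER
// (2^53 - 1) silently lose precision when parsed as a JavaScript number.
const unsafeKey = '9007199254740993' // 2^53 + 1, a valid int64 value
const parsed = Number(unsafeKey) // silently rounds to 9007199254740992
const roundTripped = String(parsed) // '9007199254740992' — not the original value
const lossless = unsafeKey // kept as a string: no precision loss
```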

Authorization

Calls to APIs can be authorized using basic auth, or via OAuth, using a token obtained through a client id/secret exchange.

Disable Auth

To disable OAuth, set the environment variable CAMUNDA_OAUTH_STRATEGY=NONE. You can use this when running against a minimal Zeebe broker in a development environment, for example.

Basic Auth

To use basic auth, set the following values either via the environment or explicitly in code via the constructor:

CAMUNDA_AUTH_STRATEGY=BASIC
CAMUNDA_BASIC_AUTH_USERNAME=...
CAMUNDA_BASIC_AUTH_PASSWORD=...

OAuth

If your platform is secured with OAuth token exchange (Camunda SaaS or Self-Managed with Identity), provide the following configuration fields at a minimum, either via the Camunda8 constructor or in environment variables:

CAMUNDA_AUTH_STRATEGY=OAUTH
ZEEBE_GRPC_ADDRESS=...
ZEEBE_CLIENT_ID=...
ZEEBE_CLIENT_SECRET=...
CAMUNDA_OAUTH_URL=...

To get a token for the Camunda SaaS Administration API or the Camunda SaaS Modeler API, set the following:

CAMUNDA_AUTH_STRATEGY=OAUTH
CAMUNDA_CONSOLE_CLIENT_ID=...
CAMUNDA_CONSOLE_CLIENT_SECRET=...

Token caching

OAuth tokens are cached in-memory and on-disk. The disk cache prevents token endpoint saturation when, for example, restarting or rolling over workers: they can all hit the cache instead of requesting new tokens.

You can turn off the disk caching by setting CAMUNDA_TOKEN_DISK_CACHE_DISABLE to true. This will cache tokens in-memory only.

By default, the token cache directory is $HOME/.camunda. You can specify a different directory by providing a full file path value for CAMUNDA_TOKEN_CACHE_DIR.

Here is an example of specifying a different cache directory via the constructor:

import { Camunda8 } from '@camunda8/sdk'

const c8 = new Camunda8({
	CAMUNDA_TOKEN_CACHE_DIR: '/tmp/cache',
})

If the cache directory does not exist, the SDK will attempt to create it (recursively). If the SDK is unable to create it, or the directory exists but is not writeable by your application, the SDK will throw an exception.

Connection configuration examples

Self-Managed

This is the complete environment configuration needed to run against the Dockerised Self-Managed stack in the docker subdirectory:

# Self-Managed
export ZEEBE_GRPC_ADDRESS='localhost:26500'
export ZEEBE_REST_ADDRESS='http://localhost:8080'
export ZEEBE_CLIENT_ID='zeebe'
export ZEEBE_CLIENT_SECRET='zecret'
export CAMUNDA_OAUTH_STRATEGY='OAUTH'
export CAMUNDA_OAUTH_URL='http://localhost:18080/auth/realms/camunda-platform/protocol/openid-connect/token'
export CAMUNDA_TASKLIST_BASE_URL='http://localhost:8082'
export CAMUNDA_OPERATE_BASE_URL='http://localhost:8081'
export CAMUNDA_OPTIMIZE_BASE_URL='http://localhost:8083'
export CAMUNDA_MODELER_BASE_URL='http://localhost:8070/api'

# Turn off the tenant ID, which may have been set by multi-tenant tests
# You can set this in a constructor config, or in the environment if running multi-tenant
export CAMUNDA_TENANT_ID=''

# TLS for gRPC is on by default. If the Zeebe broker is not secured by TLS, turn it off
export CAMUNDA_SECURE_CONNECTION=false

If you are using an OIDC provider that requires a scope parameter to be passed with the token request, set the following variable:

CAMUNDA_TOKEN_SCOPE

Here is an example of doing this via the constructor, rather than via the environment:

import { Camunda8 } from '@camunda8/sdk'

const c8 = new Camunda8({
	ZEEBE_GRPC_ADDRESS: 'localhost:26500',
	ZEEBE_REST_ADDRESS: 'http://localhost:8080',
	ZEEBE_CLIENT_ID: 'zeebe',
	ZEEBE_CLIENT_SECRET: 'zecret',
	CAMUNDA_OAUTH_STRATEGY: 'OAUTH',
	CAMUNDA_OAUTH_URL:
		'http://localhost:18080/auth/realms/camunda-platform/protocol/openid-connect/token',
	CAMUNDA_TASKLIST_BASE_URL: 'http://localhost:8082',
	CAMUNDA_OPERATE_BASE_URL: 'http://localhost:8081',
	CAMUNDA_OPTIMIZE_BASE_URL: 'http://localhost:8083',
	CAMUNDA_MODELER_BASE_URL: 'http://localhost:8070/api',
	CAMUNDA_TENANT_ID: '', // We can override values in the env by passing an empty string value
	CAMUNDA_SECURE_CONNECTION: false,
})

Camunda SaaS

Here is a complete configuration example for connection to Camunda SaaS:

export ZEEBE_GRPC_ADDRESS='5c34c0a7-7f29-4424-8414-125615f7a9b9.syd-1.zeebe.camunda.io:443'
export ZEEBE_REST_ADDRESS='https://syd-1.zeebe.camunda.io/5c34c0a7-7f29-4424-8414-125615f7a9b9'
export ZEEBE_CLIENT_ID='yvvURO9TmBnP3zx4Xd8Ho6apgeiZTjn6'
export ZEEBE_CLIENT_SECRET='iJJu-SHgUtuJTTAMnMLdcb8WGF8s2mHfXhXutEwe8eSbLXn98vUpoxtuLk5uG0en'
# export CAMUNDA_CREDENTIALS_SCOPES='Zeebe,Tasklist,Operate,Optimize' # What APIs these client creds are authorised for
export CAMUNDA_TASKLIST_BASE_URL='https://syd-1.tasklist.camunda.io/5c34c0a7-7f29-4424-8414-125615f7a9b9'
export CAMUNDA_OPTIMIZE_BASE_URL='https://syd-1.optimize.camunda.io/5c34c0a7-7f29-4424-8414-125615f7a9b9'
export CAMUNDA_OPERATE_BASE_URL='https://syd-1.operate.camunda.io/5c34c0a7-7f29-4424-8414-125615f7a9b9'
export CAMUNDA_OAUTH_URL='https://login.cloud.camunda.io/oauth/token'
export CAMUNDA_AUTH_STRATEGY='OAUTH'

# This is on by default, but we include it in case it got turned off for local tests
export CAMUNDA_SECURE_CONNECTION=true

# Admin Console and Modeler API Client
export CAMUNDA_CONSOLE_CLIENT_ID='e-JdgKfJy9hHSXzi'
export CAMUNDA_CONSOLE_CLIENT_SECRET='DT8Pe-ANC6e3Je_ptLyzZvBNS0aFwaIV'
export CAMUNDA_CONSOLE_BASE_URL='https://api.cloud.camunda.io'
export CAMUNDA_CONSOLE_OAUTH_AUDIENCE='api.cloud.camunda.io'

Debugging

The SDK uses the debug library. To enable debugging output, set a value for the DEBUG environment variable. The value is a comma-separated list of debugging namespaces. The SDK has the following namespaces:

Value                 Component
camunda:adminconsole  Administration API
camunda:modeler       Modeler API
camunda:operate       Operate API
camunda:optimize      Optimize API
camunda:tasklist      Tasklist API
camunda:oauth         OAuth Token Exchange
camunda:grpc          Zeebe gRPC channel
camunda:worker        Zeebe Worker
camunda:zeebeclient   Zeebe Client
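For example, to trace worker and gRPC channel activity (app.js is a hypothetical application entry point):

```shell
DEBUG=camunda:worker,camunda:grpc node app.js
```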

Typing of Zeebe worker variables

The variable payload in a Zeebe worker task handler is available as job.variables. By default, this is of type any.

The ZBClient.createWorker() method accepts an inputVariableDto to control the parsing of number values and provide design-time type information. Passing an inputVariableDto class to a Zeebe worker is optional. If a DTO class is passed to the Zeebe worker, it is used for two purposes:

  • To provide design-time type information on the job.variables object.
  • To specify the parsing of JSON number fields. These can potentially represent int64 values that cannot be represented accurately by the JavaScript number type. With a DTO, you can specify that specific JSON number fields are parsed losslessly to a string or BigInt.

With no DTO specified, there is no design-time type safety. At run-time, all JSON numbers are converted to the JavaScript number type. If a variable field has a number value that cannot be safely represented using the JavaScript number type (a value greater than 2^53 - 1), an exception is thrown.
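The run-time check can be sketched like this — an illustrative guard, not the SDK's actual parser:

```typescript
// Illustrative sketch of the unsafe-number check (not the SDK's internals).
// A lossless parser receives the raw digit string, so integer values can be
// validated with an exact BigInt round-trip before being coerced to number.
function parseIntegerStrict(raw: string): number {
	const n = Number(raw)
	if (!Number.isSafeInteger(n) || BigInt(raw) !== BigInt(n)) {
		throw new RangeError(`Cannot safely represent ${raw} as a JavaScript number`)
	}
	return n
}

parseIntegerStrict('42') // 42
// parseIntegerStrict('9007199254740993') // throws RangeError
```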

To provide a DTO, extend the LosslessDto class like so:

class MyVariableDto extends LosslessDto {
	name!: string
	maybeAge?: number
	@Int64String
	veryBigNumber!: string
	@BigIntValue
	veryBigInteger!: bigint
}

In this case, veryBigNumber is an int64 value. It is transferred as a JSON number on the wire, but the parser will parse it into a string so that no loss of precision occurs. Similarly, veryBigInteger is a very large integer value. In this case, we direct the parser to parse this variable field as a bigint.

You can nest DTOs like this:

class MyLargerDto extends LosslessDto {
	id!: string
	@ChildDto(MyVariableDto)
	entry!: MyVariableDto
}

Typing of custom headers

The Zeebe worker receives custom headers as job.customHeaders. The ZBClient.createWorker() method accepts a customHeadersDto to control the behavior of custom header parsing of number values and provide design-time type information.

This follows the same strategy as the job variables, as previously described.

Zeebe User Tasks

From 8.5, you can use Zeebe user tasks. See the documentation on how to migrate to Zeebe user tasks.

The SDK supports the Zeebe REST API. Be sure to set the ZEEBE_REST_ADDRESS either via environment variable or configuration field.


camunda-8-js-sdk's Issues

npm i vs npm ci in GitHub Workflows

npm ci respects the package-lock.json file, resulting in deterministic installs.

However, users will install the package using npm i, which does not.

So, running npm ci in automated tests is not going to replicate the user's experience.

Add Multitenancy for SignalBroadcast

The Node.js client provides a BroadcastSignalCommand for broadcasting tenant-aware signals in Zeebe. These commands should support multi-tenancy by exposing an optional tenantId property/method.

The following error codes may be returned:

PERMISSION_DENIED (code: 7)
when a user attempts to broadcast a signal of a tenant they are not authorized for, when multi-tenancy is enabled.

INVALID_ARGUMENT (code: 3)
For a provided tenant id, when multi-tenancy is disabled
For a missing tenant id, when multi-tenancy is enabled
For an invalid tenant id (i.e. doesn't match the pre-defined format), when multi-tenancy is enabled.

Java client issue: camunda/camunda#13558

Configure documentation compilation in CI

The SDK API docs task needs to run in CI and automatically update the docs.

We'll run it on main, and tag the methods to indicate what release they were implemented in.

OAuth token refresh has a race condition

Tests in GitHub CI for Self-Managed are failing.

The cause seems to be a race condition in token refresh. At the moment, the OAuth component caches the token and compares its expiry time to the current time, only requesting a new token once the token has already expired.

If the token expires in 1ms it will be used for a call, but this will probably result in it expiring before it hits the service.

To deal with this, I am adding a new configuration field: CAMUNDA_OAUTH_TOKEN_REFRESH_THRESHOLD_MS. It defaults to 1000 (1 second).

This represents the lead time to refresh the token. So, by default a cached token will be refreshed 1 second before it expires, and this can be tuned by the user depending on their environment.
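The intended decision logic is simple — a sketch with illustrative names, not the SDK's internals:

```typescript
// Sketch of the refresh decision with a lead-time threshold.
// A cached token is refreshed once it is within the threshold of expiring,
// rather than only after it has already expired.
function shouldRefresh(
	expiresAtMs: number,
	nowMs: number,
	thresholdMs = 1000
): boolean {
	return expiresAtMs - nowMs <= thresholdMs
}
```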

Run tests on transpiled code and test type surface

At the moment all tests are running on the TypeScript source. The package entry point and any misconfigurations in the transpilation are not tested. This means that a package can be released via automation that doesn't work (See #114).

To make sure that this doesn't happen, we need a test that transpiles the code and then runs a smoke test on it using the package entry point.

Zeebe Worker payload parsing needs to be lossless

Similar to #80, the Zeebe worker variable payload may contain int64 number values.

Without a user-provided Dto class to leverage #78, we can't know the variable type - but we can detect an unsafe number value if we parse with lossless-json.

An option could be to specify via configuration how variables with a number value that is unsafe are handled.

If there is no Dto provided, we could default to throwing during the parse operation. This stops the application from operating with bad data.

And we could allow the user to set a default behaviour, "coerce to BigInt" or "coerce to string".
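The proposed options could be sketched as a policy switch — the names here are hypothetical, not a shipped configuration surface:

```typescript
// Hypothetical sketch of the proposed fallback behaviours for unsafe numbers.
type UnsafeNumberPolicy = 'throw' | 'coerce-to-string' | 'coerce-to-bigint'

function handleUnsafeNumber(
	raw: string,
	policy: UnsafeNumberPolicy
): string | bigint {
	switch (policy) {
		case 'coerce-to-string':
			return raw
		case 'coerce-to-bigint':
			return BigInt(raw)
		default:
			// Default: refuse to operate with bad data
			throw new RangeError(`Unsafe int64 value in variable payload: ${raw}`)
	}
}
```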

Support Zeebe StreamActivatedJobs API

SDK Component

Zeebe

The Zeebe Gateway has a new RPC, StreamJobs, which opens a long-lived stream that essentially "pushes" jobs to the worker.

This obviates polling loops.

camunda/camunda#14152

Environment variable to enable it:

ZEEBE_CLIENT_WORKER_STREAMENABLED

See also the documentation about the general concept, including how back pressure works and can be implemented for your custom client: https://stage.docs.camunda.io/docs/next/components/concepts/job-workers/#job-streaming

Support custom scope for tokens

SDK Component

OAuth component, and Zeebe client (if it doesn't use the OAuth component)

Expected Behavior

The SDK has the ability to specify the token scope via environment variable and constructor, to support custom identity providers. See: camunda/camunda-modeler#4102

Current Behavior

OAuth hardcodes the scopes per component.

Intermittent token authorisation failure

Integration tests are still intermittently failing on authorisation.

Periodically a unit test will fail with 401: UNAUTHORISED.

If I "re-run failed jobs" is will reliably pass on the second run.

My hypothesis is that the API client involved in the test is attempting to make its calls with an expired token.

Token expiry is handled in the SDK in the OAuth component. This component encapsulates retrieving tokens from the token endpoint, caching them in memory and on disk, and providing a token to an API client when the API client wants to make an API call.

The OAuth client should be checking if it has a token in-memory or on-disk (the on-disk caching is for when applications are restarted), then checking if the token is expired or is likely to expire soon (there is a threshold setting that represents "this might expire before the call makes the roundtrip") and either requesting a new token from the endpoint to pass on or passing on the cached token.

Some tests - notably the Tasklist ones - will fail multiple calls when they do fail.

Otherwise, I've noticed that it happens later in the test suite, which leads me to think that it happens when a 300 second validity token expires and there is some race condition or logic error that means it is not correctly refreshed before being passed to the API client.

This is difficult to reproduce reliably.

Maybe it needs some specific test of the token refresh timing logic? Or maybe there is something obvious in the code that I am missing.

Move repository to Camunda GitHub org

The repository needs to be moved to the Camunda GitHub organisation, and all the references and links need to be updated.

  • Move repo to Camunda org
  • Make sure I have all necessary permissions on the new repo
  • Update links to API docs
  • Update package.json
  • Update README.md

Jest global setup breaks unit testing

Unit tests can no longer be run without an integration environment. The global setup and teardown should have code in them to skip the Operate and Zeebe client creation if they are not running in an integration test.

Configure GitHub bot for automated releases and protected branches

The workflow is like this:

All development work should be done via PRs against the alpha branch.

When a PR is merged to alpha, the tests are run, then semantic-release runs to determine if a new release is required. If it is, then an alpha package is published to NPM.

Production releases are accomplished by opening a PR from alpha to main.

When a PR is merged into main, semantic-release runs and if a new package release is required, a package is published to NPM.

Soup-to-nuts test

Create a complete integration test scenario using all the things in a coordinated way.

Intermittent error on multi-tenancy: "Error: 16 UNAUTHENTICATED: Expected Identity to provide authorized tenants, see cause for details"

Expected Behavior

No error should pop up

Current Behavior

A cluster is created, multi-tenancy enabled. Two tenants are created, red and green.

A script is created:

  • Deploying the same simple process on both tenants. This process contains a service task.
  • Starting a worker with a filter on each tenant (so two workers are started).
  • Every 5 seconds, a new process instance is created on the green tenant.
  • Every 20 seconds, a new process instance is created on the red tenant.

Process instances are created, and workers execute jobs, but from time to time, an Error: 16 UNAUTHENTICATED: is visible in the log. Even so, all service tasks are completed.

Possible Solution

No solution

Steps to Reproduce

Create a cluster, with two tenants red and green
create a client ID / Client Secret
Run the script

Context (Environment)

Multi-tenancy usage

Detailed Description

Log visible:

 ##### Started/Job RED: 0/0, GREEN: 1/2
------  Green process instance started: [2251799813686010]
>>>> Execute GreenJob PI=[2251799813686010] tenant=[green]
15:42:07.412 | zeebe |  [service-task] ERROR: Grpc Stream Error: 16 UNAUTHENTICATED: Expected Identity to provide authorized tenants, see cause for details


 ##### Started/Job RED: 0/0, GREEN: 2/3
------  Green process instance started: [4503599627371180]
>>>> Execute GreenJob PI=[4503599627371180] tenant=[green]
15:42:12.524 | zeebe |  [service-task] ERROR: Grpc Stream Error: 16 UNAUTHENTICATED: Expected Identity to provide authorized tenants, see cause for details


 ##### Started/Job RED: 0/0, GREEN: 3/4
(node:226) UnhandledPromiseRejectionWarning: Error: 16 UNAUTHENTICATED: Expected Identity to provide authorized tenants, see cause for details
    at callErrorFromStatus (/mnt/d/pym/CamundaDrive/Support & Consulting/S-19554 Avertra NodeJS/avertra-test/node_modules/@grpc/grpc-js/build/src/call.js:31:19)
    at Object.onReceiveStatus (/mnt/d/pym/CamundaDrive/Support & Consulting/S-19554 Avertra NodeJS/avertra-test/node_modules/@grpc/grpc-js/build/src/client.js:192:76)
    at listener.onReceiveStatus.processedStatus (/mnt/d/pym/CamundaDrive/Support & Consulting/S-19554 Avertra NodeJS/avertra-test/node_modules/@grpc/grpc-js/build/src/call-interface.js:78:35)
    at Object.onReceiveStatus (/mnt/d/pym/CamundaDrive/Support & Consulting/S-19554 Avertra NodeJS/avertra-test/node_modules/zeebe-node/dist/lib/GrpcClient.js:97:36)
    at InterceptingListenerImpl.onReceiveStatus (/mnt/d/pym/CamundaDrive/Support & Consulting/S-19554 Avertra NodeJS/avertra-test/node_modules/@grpc/grpc-js/build/src/call-interface.js:73:23)
    at Object.onReceiveStatus (/mnt/d/pym/CamundaDrive/Support & Consulting/S-19554 Avertra NodeJS/avertra-test/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:360:141)
    at Object.onReceiveStatus (/mnt/d/pym/CamundaDrive/Support & Consulting/S-19554 Avertra NodeJS/avertra-test/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:323:181)
    at process.nextTick (/mnt/d/pym/CamundaDrive/Support & Consulting/S-19554 Avertra NodeJS/avertra-test/node_modules/@grpc/grpc-js/build/src/resolving-call.js:99:78)
    at process._tickCallback (internal/process/next_tick.js:61:11)
for call at
    at ServiceClientImpl.makeUnaryRequest (/mnt/d/pym/CamundaDrive/Support & Consulting/S-19554 Avertra NodeJS/avertra-test/node_modules/@grpc/grpc-js/build/src/client.js:160:32)
    at ServiceClientImpl.<anonymous> (/mnt/d/pym/CamundaDrive/Support & Consulting/S-19554 Avertra NodeJS/avertra-test/node_modules/@grpc/grpc-js/build/src/make-client.js:105:19)
    at Promise (/mnt/d/pym/CamundaDrive/Support & Consulting/S-19554 Avertra NodeJS/avertra-test/node_modules/zeebe-node/dist/lib/GrpcClient.js:272:47)
    at process._tickCallback (internal/process/next_tick.js:68:7)
(node:226) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:226) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

Possible Implementation

red-green.txt
simple-bpmn.txt

Simplify the configuration of ZBClient

The ZBClient component supports a lot of backward-compatible constructor options, including multiple constructor signatures.

This makes modification and maintenance a challenge. While adding multi-tenancy support to the Zeebe client, this is causing a lot of friction and accidental complexity.

For the 8.5 release, I will simplify the constructor signature to be a Partial<ZBClientConfig>, and then make the configuration hydrator do the following:

Read all environment variables into a config map, then overwrite that map with any explicit configuration passed to the constructor.

This makes the configuration explicit and simple. The zero-conf constructor is default. All configuration from environment variables is applied, then anything that is explicitly passed to a ZBClient constructor overrides it.

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Rate-Limited

These updates are currently rate-limited. Click on a checkbox below to force their creation now.

  • fix(deps): update dependency @grpc/proto-loader to v0.7.12
  • fix(deps): update dependency reflect-metadata to v0.2.2
  • chore(deps): update dependency typescript to v5.4.3
  • chore(deps): update typescript-eslint monorepo to v6.21.0 (@typescript-eslint/eslint-plugin, @typescript-eslint/parser)
  • fix(deps): update dependency @grpc/grpc-js to v1.10.6
  • fix(deps): update dependency neon-env to ^0.2.0
  • chore(deps): update actions/checkout action to v4
  • chore(deps): update actions/setup-node action to v4
  • chore(deps): update commitlint monorepo to v19 (major) (@commitlint/cli, @commitlint/config-conventional)
  • chore(deps): update dependency @types/debug to v4
  • chore(deps): update dependency @types/node to v20
  • chore(deps): update dependency delay to v6
  • chore(deps): update dependency husky to v9
  • chore(deps): update dependency jest to v29
  • chore(deps): update dependency semantic-release to v23
  • chore(deps): update docker.elastic.co/elasticsearch/elasticsearch docker tag to v8
  • chore(deps): update typescript-eslint monorepo to v7 (major) (@typescript-eslint/eslint-plugin, @typescript-eslint/parser)
  • fix(deps): update dependency chalk to v5
  • fix(deps): update dependency got to v14
  • fix(deps): update dependency long to v5
  • fix(deps): update dependency node-fetch to v3
  • fix(deps): update dependency promise-retry to v2
  • fix(deps): update dependency typed-duration to v2
  • fix(deps): update dependency uuid to v9 (uuid, @types/uuid)
  • ๐Ÿ” Create all rate-limited PRs at once ๐Ÿ”

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

docker-compose
docker/docker-compose-modeler.yaml
docker/docker-compose-multitenancy.yml
docker/docker-compose.yml
zeebe-extra/docker/docker-compose.yml
  • camunda/zeebe 8.3.3
github-actions
.github/workflows/commitlint.yml
  • actions/checkout v4
  • actions/setup-node v4
.github/workflows/multitenancy.yml
  • actions/checkout v4
  • actions/setup-node v4
.github/workflows/saas.yml
  • actions/checkout v4
  • actions/setup-node v4
.github/workflows/singletenant.yml
  • actions/checkout v3
  • actions/setup-node v3
.github/workflows/tag-and-publish.yml
  • actions/checkout v3
  • actions/setup-node v3
  • JamesIves/github-pages-deploy-action v4
  • docker.elastic.co/elasticsearch/elasticsearch 7.17.5
  • camunda/zeebe 8.4.5
zeebe-extra/.github/workflows/build-docs.yml
  • actions/checkout v3
  • actions/setup-node v3
  • JamesIves/github-pages-deploy-action v4
  • docker.elastic.co/elasticsearch/elasticsearch 7.17.5
  • camunda/zeebe 8.3.3
zeebe-extra/.github/workflows/test-camunda-saas-push.yml
  • actions/checkout v1
zeebe-extra/.github/workflows/test.yml
  • camunda/zeebe 8.3.3
npm
package.json
  • @grpc/grpc-js 1.9.7
  • @grpc/proto-loader 0.7.10
  • chalk ^2.4.2
  • console-stamp ^3.0.2
  • dayjs ^1.8.15
  • debug ^4.3.4
  • fast-xml-parser ^4.1.3
  • got ^11.8.6
  • lodash.mergewith ^4.6.2
  • long ^4.0.0
  • lossless-json ^4.0.1
  • neon-env ^0.1.3
  • node-fetch ^2.7.0
  • promise-retry ^1.1.1
  • reflect-metadata ^0.2.1
  • stack-trace 0.0.10
  • typed-duration ^1.0.12
  • uuid ^7.0.3
  • @commitlint/cli ^18.4.3
  • @commitlint/config-conventional ^18.4.3
  • @semantic-release/changelog ^6.0.3
  • @semantic-release/git ^10.0.1
  • @sitapati/testcontainers ^2.8.1
  • @types/debug ^4.1.12
  • @types/jest ^29.5.11
  • @types/lodash.mergewith ^4.6.9
  • @types/node ^20.9.4
  • @types/node-fetch ^2.6.11
  • @types/promise-retry ^1.1.6
  • @types/uuid ^9.0.8
  • @typescript-eslint/eslint-plugin ^6.14.0
  • @typescript-eslint/parser ^6.14.0
  • commitizen ^4.3.0
  • cz-conventional-changelog ^3.3.0
  • eslint ^8.55.0
  • eslint-config-prettier ^9.1.0
  • eslint-plugin-import ^2.29.1
  • eslint-plugin-prettier ^5.0.1
  • husky ^8.0.3
  • jest ^29.7.0
  • lint-staged ^15.2.0
  • prettier ^3.1.1
  • semantic-release ^22.0.12
  • ts-jest ^29.1.1
  • tsconfig-paths ^4.2.0
  • typedoc ^0.25.9
  • typedoc-plugin-include-example ^1.2.0
  • typedoc-plugin-missing-exports ^2.2.0
  • typescript ^5.3.3
zeebe-extra/package.json
  • @grpc/grpc-js 1.9.7
  • @grpc/proto-loader 0.7.10
  • chalk ^2.4.2
  • console-stamp ^3.0.2
  • dayjs ^1.8.15
  • debug ^4.2.0
  • fast-xml-parser ^4.1.3
  • fp-ts ^2.5.1
  • got ^11.8.5
  • long ^4.0.0
  • promise-retry ^1.1.1
  • stack-trace 0.0.10
  • typed-duration ^1.0.12
  • uuid ^7.0.3
  • @camunda8/operate ^8.4.0
  • @sitapati/testcontainers ^2.8.1
  • @types/debug 0.0.31
  • @types/got ^9.6.9
  • @types/node ^18.19.3
  • @types/promise-retry ^1.1.3
  • @types/stack-trace 0.0.33
  • @types/uuid ^3.4.4
  • delay ^4.3.0
  • jest ^27.2.3
  • jest-environment-node-debug ^2.0.0
  • typedoc ^0.21.10
  • node >=16.6.1

  • Check this box to trigger a request for Renovate to run again on this repository

Handle Modeler API OAuth across Self-Managed and SaaS

Getting an OAuth token to use the Modeler API has some significant differences in ergonomics between Self-Managed and SaaS that have an impact on the design of the SDK.

Issue 1 - Different credential sets between SaaS and Self-Managed

  • On SaaS, the Modeler API client can use the same client credentials as the Admin Console client. These credentials are scoped to the organisation, not to a cluster. This means that you cannot have an id/secret pair that accesses both Zeebe and the Modeler.
  • On Self-Managed, the Modeler API client can be the same credential set as a Zeebe client. That client can access both Zeebe and the Modeler.

On SaaS, the env var for an admin console client credential id is CAMUNDA_CONSOLE_CLIENT_ID.
On SaaS, the env var for a cluster application client credential id is ZEEBE_CLIENT_ID.

If the user is accessing Modeler on SaaS, we need to use the CAMUNDA_CONSOLE_CLIENT_ID. We can determine if the application is talking to SaaS by the OAuth URL, which is a known value for SaaS.

If the user is accessing Modeler on Self-Managed (i.e. using an OAuth URL other than the SaaS one), we will use the ZEEBE_CLIENT_ID to request a token. If Modeler is moved out of the application credential pool on Self-Managed in the future, we will deal with it at that point.

Issue 2 - Different audience for token request between SaaS and Self-Managed

  • On SaaS, a token for use with the Modeler API needs to be requested with an audience of api.cloud.camunda.io.
  • On Self-Managed, the audience field should be omitted.

On SaaS, again detected via the OAuth endpoint URL, we will set the audience. On Self-Managed, we will omit the audience field, unless an explicit CAMUNDA_OAUTH_MODELER_AUDIENCE value is set.

@camunda8/operate calls wrong endpoint for flownodes methods

SDK Component

Operate

Expected Behavior

The following methods work

  • searchFlownodeInstances
  • getFlownodeInstance

Current Behavior

It looks like the methods don't call the correct endpoints:

  • searchFlownodeInstances calls the flownodes/search but should call the flownode-instances/search
  • getFlownodeInstance calls the flownodes/${key} but should call the flownode-instances/${key}

Context (Environment)

"@camunda8/operate": "^8.4.0"

Operate.getJSONVariablesforProcess needs to be made lossless

The lossless JSON parser from #78 needs to be used for the convenience method getJSONVariablesforProcess.

User variables may contain int64 values.

We should allow the user to optionally pass in a class that extends LosslessDto to be able to get the variable data back with no loss of precision.

As a fall-back (when no user Dto is passed in), we could parse it using lossless-json, and then convert any unsafe numbers into BigInt or string.

Making it a string would make it consistent with how we handle int64 in the SDK - however, without metadata about the payload typing, we are reduced to examining the actual payload and we can only detect int64 when the value exceeds the range of the JS number type.

This means that the type of the field would vary depending on the value - which is strange behaviour. This could lead to application errors where string concatenation could occur when arithmetic addition was expected.
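The hazard can be demonstrated directly:

```typescript
// Demonstration of the value-dependent-type hazard: if unsafe values are
// coerced to string, '+' silently concatenates instead of adding.
const safeValue = 2 // small values remain JavaScript numbers
const unsafeValue: any = '9007199254740993' // unsafe values become strings
const result = safeValue + unsafeValue // string concatenation, not arithmetic
```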

Add CI tests for Multi-tenancy support

A Self-Managed multi-tenancy stack, running locally or in CI, should function as a test environment for multi-tenancy tests that prove the SDK operates as expected in a multi-tenant environment.

  • Get Web Modeler running in a multi-tenant environment
  • SDK functions fail when run in multi-tenant without tenantId configured
  • SDK functions succeed when run in multi-tenant with tenantId configured

Restructure test workflows

On a push to alpha and main branches, we only need to run the publish workflow.

For PRs, create a single parallelised test without the publish step in it.

Something to think about: we can only run one SaaS test at a time.

Configure semantic release and branching strategy

The SDK package tracks Camunda Platform 8 minor versioning. Feature releases to support current Platform minor version features result in a patch release of the SDK.

The repository uses [semantic-release](https://github.com/semantic-release/semantic-release) to create releases. Because we track the Camunda 8 Platform minor version, we treat feature implementation during a minor release cycle as a patch release rather than a minor release.

Creating a commit with a feat commit message will cause the package version patch release number to increment. To update the minor version, a commit with the type minor is needed.

A commit with the type release will trigger a patch release even if there are no features or fixes. This can be used on the alpha branch to test automation if needed.

Normalise API int64 type across Node SDK

The JSON number value passed by the REST API for processInstanceKey (for example) is of type $int64.

This can't be reliably represented by the JavaScript number type, which is a double-precision floating point value with only 2^53 of integer precision.

This means that systems that are running for a long time can generate keys that cannot be represented by the SDK.

The gRPC library deals with this by parsing int64 to a string representation. Since we don't do arithmetic on keys, it might as well be a string.

The REST library that I am currently using (got), however, represents these as number, leading to two issues:

  1. The imprecision issue that will become an L1 in production when a system hits key values that cannot be represented as the JS number type.

  2. The impedance mismatch: ZeebeGrpcClient returns long keys as type string, while the other APIs expect a number type as input, and vice versa.

I'm looking for alternative REST clients that can serialise long integer types as JavaScript string.
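The precision problem above can be demonstrated directly. This is illustration, not SDK code: 2^53 + 1 cannot be represented as a JS number, so JSON.parse silently rounds it, while keeping the key as a string preserves it exactly.

```typescript
// Demonstration of int64 precision loss via JSON.parse.
const raw = '{"processInstanceKey": 9007199254740993}' // 2^53 + 1

// JSON.parse coerces to number and silently rounds to 9007199254740992
const lossyKey: number = JSON.parse(raw).processInstanceKey

// Extracting the digits as a string preserves the exact value
const exactKey: string = raw.match(/:\s*(\d+)/)![1]

const precisionWasLost = String(lossyKey) !== exactKey
```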

Add Zeebe Tasklist REST API

The Tasklist API is moving to the Zeebe Gateway. See here: camunda/camunda#15622

How this is implemented in the SDK will depend on the API surface area that we expose to developers.

It may be possible to do this with no change to the API surface area.

This will be targeted to post-8.5.

Archive deprecated repositories and packages

The following repositories need to be archived:

Packages that need to be deprecated with a notice:

  • @camunda8/console
  • @camunda8/tasklist
  • @camunda8/oauth
  • @camunda8/operate
  • @camunda8/optimize
  • @camunda8/modeler
  • @camunda8/zeebe
  • camunda-8-credentials-from-env
  • camunda-8-sdk
  • camunda-saas-oauth
  • zeebe-node

Refactor Dto in ZeebeGrpcClient

This one is a stretch goal for the release.

The new clients have a clean structure with all interfaces in a Dto namespace.

The ZeebeGrpcClient has interfaces all over the place. It will be a significant documentation win to coalesce them all into a Dto namespace.

Add MigrateProcessInstance API support

A future release of Zeebe will support migrating a running process instance to a newer version of the process model. Simple process migration is targeted for 8.4.

The client API for this has been added in stub form with this PR: camunda/camunda#15199.

Add this API stub to the Zeebe Node client.

Add "Fully Supported" and definition to README

What does "supported" mean?

This is the official supported-by-Camunda Nodejs SDK for Camunda Platform 8.

The Node.js SDK will not always support all features of Camunda Platform 8 immediately upon their release. Complete API coverage for a platform release will lag behind the platform release.

Prioritisation of implementing features is influenced by customer demand.

Test new SDK with Desktop Modeler

This is a smoke test for the release.

Replace the zeebe-node dependency in the Desktop Modeler with the new SDK, and raise a PR for Desktop Modeler.

Error: Cannot find module 'lib' in require stack...

Hi! I know this version is in alpha, but I wanted to submit my issue in case it is useful.
An error is thrown when starting a NestJS project with camunda8/[email protected] or camunda8/[email protected]:

Error: Cannot find module 'lib'
Require stack:
- /home/xxx/workspace/projects/edl/poc/camunda/node_modules/.pnpm/@[email protected]/node_modules/@camunda8/sdk/dist/admin/lib/AdminApiClient.js
- /home/xxx/workspace/projects/edl/poc/camunda/node_modules/.pnpm/@[email protected]/node_modules/@camunda8/sdk/dist/admin/index.js
- /home/xxx/workspace/projects/edl/poc/camunda/node_modules/.pnpm/@[email protected]/node_modules/@camunda8/sdk/dist/index.js
- /home/xxx/workspace/projects/edl/poc/camunda/dist/process.service.js
- /home/xxx/workspace/projects/edl/poc/camunda/dist/app.controller.js
- /home/xxx/workspace/projects/edl/poc/camunda/dist/app.module.js
- /home/xxx/workspace/projects/edl/poc/camunda/dist/main.js

SDK Component

All

Expected Behavior

The 'lib' module is found.

Current Behavior

The 'lib' module is not found.

Possible Solution

The path of 'lib' (or of any other require() in the compiled JS) seems to target a file in the same folder.

Steps to Reproduce

  1. nest new my-project
  2. Call the Camunda8 constructor anywhere (e.g. const camunda = new Camunda8() in the controller)
  3. npm run start:dev

Context (Environment)

NodeJS 20
NestJS@10 project
PNPM

Add Slack notifications for workflows

Workflow fragment:

  Post-Failure:
    needs: Build_nightly
    if: failure()
    runs-on: ubuntu-latest
    steps:
    - name: Post to a Slack channel
      uses: slackapi/[email protected]
      with:
        channel-id: ${{ secrets.SLACK_CHANNEL_ID }}
        slack-message: "Nightly build failed. <https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}|Go to the build.>"
      env:
        SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}

Why doesn't the SDK run in the web browser?

Here are the things that need to be dealt with:

  • The web browser does not support gRPC
  • Web browsers cannot easily support custom SSL certificates
  • Secrets cannot be secured in the browser

These three mean that the web browser environment requires distinct strategies from the server-side environment.

Creating and maintaining differential strategies for the two environments (client- and server-side) is a significant engineering effort.

Intermittent 401 unauthorised in integration tests

"Randomly", a test will fail in the suite with a 401 Unauthorised response from an API.

Example:

 console.log
    Failed to search for process instances for 2251799813685806

      at src/zeebe/lib/cancelProcesses.ts:16:12

  console.log
    HTTPError: Response code 401 (Unauthorized) (request to http://localhost:8081/v1/process-instances/search)
        at Request.<anonymous> (/Users/jwulf/workspace/c8-sdk/node_modules/got/dist/source/as-promise/index.js:118:42)

This is probably due to an expired token being used for the request.

To debug this, I'll need to write a lifecycle unit test for the OAuth token expiry strategy.
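The core check such a lifecycle test would exercise can be sketched as follows. The cache shape, margin value, and function names are assumptions of this sketch, not the SDK's actual types.

```typescript
// Hedged sketch: decide whether a cached OAuth token is still safe to use.
interface CachedToken {
	accessToken: string
	expiresAtMs: number // absolute expiry time in epoch milliseconds
}

const EXPIRY_SAFETY_MARGIN_MS = 10_000

function tokenIsUsable(token: CachedToken, nowMs: number): boolean {
	// Treat the token as expired slightly early, so an in-flight request
	// cannot arrive at the server after actual expiry and draw a 401.
	return nowMs < token.expiresAtMs - EXPIRY_SAFETY_MARGIN_MS
}
```

A lifecycle test would assert that a token inside the safety margin is discarded and refreshed rather than reused.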

Unify debug namespace

The SDK uses the debug library for debugging. Users can set a DEBUG environment variable to enable debugging information to help trace issues.

Normalise all the values and document them:

camunda:tasklist
camunda:oauth
camunda:grpc
camunda:worker
camunda:modeler
camunda:adminconsole
camunda:optimize
camunda:zeebeclient
camunda:operate
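The filtering behaviour the debug library applies to these namespaces can be sketched without the library itself. This is an illustrative approximation of its matching rule (the helper is hypothetical and ignores debug's exclusion syntax):

```typescript
// Hedged sketch of DEBUG-style namespace filtering: a logger emits only
// when its namespace matches one of the comma-separated DEBUG patterns,
// where '*' is a wildcard (e.g. DEBUG=camunda:* enables all of the above).
function namespaceEnabled(namespace: string, debugEnv: string): boolean {
	return debugEnv.split(',').some((pattern) => {
		const re = new RegExp('^' + pattern.trim().replace(/\*/g, '.*') + '$')
		return re.test(namespace)
	})
}
```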
