
llrt's Introduction


LLRT (Low Latency Runtime) is a lightweight JavaScript runtime designed to address the growing demand for fast and efficient Serverless applications. LLRT offers more than 10x faster startup and up to 2x lower overall cost compared to other JavaScript runtimes running on AWS Lambda.

It's built in Rust, utilizing QuickJS as its JavaScript engine, ensuring efficient memory usage and swift startup.

Warning

LLRT is an experimental package. It is subject to change and intended only for evaluation purposes.

[Benchmark: LLRT - DynamoDB Put, ARM, 128MB]

[Benchmark: Node.js 20 - DynamoDB Put, ARM, 128MB]

HTTP benchmarks are measured as round-trip time for a cold start (see Benchmark Methodology below).

Configure Lambda functions to use LLRT

Download the latest LLRT release from https://github.com/awslabs/llrt/releases

Option 1: Custom runtime (recommended)

Choose Custom Runtime on Amazon Linux 2023 and package the LLRT bootstrap binary together with your JS code.
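A minimal sketch of this flow, assuming your bundled handler lives in index.mjs (function name, role ARN, and account ID are illustrative; the release zip is assumed to contain the bootstrap binary):

unzip llrt-lambda-arm64.zip bootstrap
zip function.zip bootstrap index.mjs
aws lambda create-function \
  --function-name my-llrt-function \
  --runtime provided.al2023 \
  --architectures arm64 \
  --handler index.handler \
  --zip-file fileb://function.zip \
  --role arn:aws:iam::<YOUR_ACCOUNT_ID>:role/my-lambda-role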

Option 2: Use a layer

Choose Custom Runtime on Amazon Linux 2023, upload llrt-lambda-arm64.zip or llrt-lambda-x64.zip as a layer, and add it to your function
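One possible CLI flow for publishing the layer (the layer name is illustrative):

aws lambda publish-layer-version \
  --layer-name llrt \
  --zip-file fileb://llrt-lambda-arm64.zip \
  --compatible-runtimes provided.al2023 \
  --compatible-architectures arm64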

Option 3: Package LLRT in a container image

See our AWS SAM example or:

FROM --platform=arm64 busybox
WORKDIR /var/task/
# copy your bundled application code
COPY app.mjs ./
# download the LLRT container binary and make it executable
ADD https://github.com/awslabs/llrt/releases/latest/download/llrt-container-arm64 /usr/bin/llrt
RUN chmod +x /usr/bin/llrt

# tell LLRT which handler to invoke (file.exportedFunction)
ENV LAMBDA_HANDLER "app.handler"

CMD [ "llrt" ]
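To try the image, build it for the matching platform (the image tag is illustrative; deploying to Lambda additionally requires pushing the image to Amazon ECR):

docker build --platform linux/arm64 -t my-llrt-app .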

Option 4: AWS SAM

The following example project sets up a Lambda function instrumented with a layer containing the LLRT runtime.

Option 5: AWS CDK

You can use the cdk-lambda-llrt construct library to deploy LLRT Lambda functions with AWS CDK.

import { LlrtFunction } from "cdk-lambda-llrt";

const handler = new LlrtFunction(this, "Handler", {
  entry: "lambda/index.ts",
});

See Construct Hub and its examples for more details.

That's it 🎉

Important

Even though LLRT supports ES2023, it is NOT a drop-in replacement for Node.js. Consult the Compatibility matrix and API documentation for more details. All dependencies should be bundled for a browser platform, with the included @aws-sdk packages marked as external.

Testing & ensuring compatibility

The best way to ensure your code is compatible with LLRT is to write tests and execute them using the built-in test runner. The test runner currently supports Jest/Chai assertions. There are two main types of tests you can create:

Unit Tests

  • Useful for validating specific modules and functions in isolation
  • Allow focused testing of individual components

End-to-End (E2E) Tests

  • Validate overall compatibility with AWS SDK and WinterCG compliance
  • Test the integration between all components
  • Confirm expected behavior from end-user perspective

For more information about the E2E Tests and how to run them, see here.

Test runner

The test runner uses a lightweight Jest-like API and supports Jest/Chai assertions. For examples of how to implement tests for LLRT, see the /tests folder of this repository.

To run tests, execute the llrt test command. LLRT scans the current directory and sub-directories for files that end with *.test.js or *.test.mjs. You can also provide a specific test directory to scan by using the llrt test -d <directory> option.

The test runner also supports filters. Using filters is as simple as adding additional command line arguments, e.g. llrt test crypto will only run test files whose names contain crypto.
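A test file might look like the following sketch, assuming the Jest-style describe/it/expect globals provided by the runner (the file name math.test.mjs is illustrative):

// math.test.mjs
describe("Math.max", () => {
  it("returns the largest argument", () => {
    expect(Math.max(1, 3, 2)).toEqual(3);
  });

  it("returns -Infinity when called without arguments", () => {
    expect(Math.max()).toEqual(-Infinity);
  });
});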

Compatibility matrix

Note

LLRT only supports a fraction of the Node.js APIs. It is NOT a drop-in replacement for Node.js, nor will it ever be. Below is a high-level overview of partially supported APIs and modules. For more details, consult the API documentation.

Module          Node.js   LLRT ⚠️
buffer          ✔︎         ✔︎
streams         ✔︎         ✔︎*
child_process   ✔︎         ✔︎⏱
net:sockets     ✔︎         ✔︎⏱
net:server      ✔︎         ✔︎
tls             ✔︎         ✘⏱
fetch           ✔︎         ✔︎
http            ✔︎         ✘⏱**
https           ✔︎         ✘⏱**
fs/promises     ✔︎         ✔︎
fs              ✔︎         ✘⏱
path            ✔︎         ✔︎
timers          ✔︎         ✔︎
crypto          ✔︎         ✔︎
process         ✔︎         ✔︎
encoding        ✔︎         ✔︎
console         ✔︎         ✔︎
events          ✔︎         ✔︎
zlib            ✔︎         ✔︎
ESM             ✔︎         ✔︎
CJS             ✔︎         ✔︎
async/await     ✔︎         ✔︎
Other modules   ✔︎         ✘

⚠️ = partially supported in LLRT
⏱ = planned partial support
* = not native
** = use fetch instead

Using node_modules (dependencies) with LLRT

Since LLRT is meant for performance-critical applications, it is not recommended to deploy node_modules without bundling, minification and tree-shaking.

LLRT can work with any bundler of your choice. Below are some configurations for popular bundlers:

Warning

LLRT implements native modules that are largely compatible with the following external packages. By mapping these packages to their LLRT counterparts in your bundler's alias configuration, your application may be faster, but we recommend testing thoroughly as they are not fully compatible.

External package   LLRT native module
fast-xml-parser    llrt:xml
uuid               llrt:uuid
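With esbuild's build API, for instance, the mapping might look like this sketch (the alias option requires a reasonably recent esbuild; the llrt: modules are also marked external so the bundler does not try to resolve them from disk):

import { build } from "esbuild";

await build({
  entryPoints: ["index.js"],
  outfile: "dist/index.js",
  platform: "node",
  target: "es2023",
  format: "esm",
  bundle: true,
  minify: true,
  // map the external packages onto LLRT's native modules
  alias: {
    "fast-xml-parser": "llrt:xml",
    uuid: "llrt:uuid",
  },
  // keep AWS SDK packages and the native llrt: modules out of the bundle
  external: ["@aws-sdk/*", "@smithy/*", "llrt:*"],
});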

ESBuild

esbuild index.js --platform=node --target=es2023 --format=esm --bundle --minify --external:@aws-sdk --external:@smithy

Rollup

import resolve from "@rollup/plugin-node-resolve";
import commonjs from "@rollup/plugin-commonjs";
import terser from "@rollup/plugin-terser";

export default {
  input: "index.js",
  output: {
    file: "dist/bundle.js",
    format: "esm",
    sourcemap: true,
    target: "es2023",
  },
  plugins: [resolve(), commonjs(), terser()],
  external: ["@aws-sdk", "@smithy"],
};

Webpack

import path from "node:path";
import { fileURLToPath } from "node:url";
import TerserPlugin from "terser-webpack-plugin";
import nodeExternals from "webpack-node-externals";

// webpack requires an absolute output path
const __dirname = path.dirname(fileURLToPath(import.meta.url));

export default {
  entry: "./index.js",
  output: {
    path: path.resolve(__dirname, "dist"),
    filename: "bundle.js",
    libraryTarget: "module",
  },
  // emitting an ES module requires the outputModule experiment
  experiments: {
    outputModule: true,
  },
  target: "web",
  mode: "production",
  resolve: {
    extensions: [".js"],
  },
  externals: [nodeExternals(), "@aws-sdk", "@smithy"],
  optimization: {
    minimize: true,
    minimizer: [
      new TerserPlugin({
        terserOptions: {
          ecma: 2023,
        },
      }),
    ],
  },
};

Using AWS SDK (v3) with LLRT

LLRT includes many AWS SDK clients and utils as part of the runtime, built into the executable. These SDK clients have been specifically fine-tuned to offer the best performance without compromising compatibility. LLRT replaces some JavaScript dependencies used by the AWS SDK with native implementations, such as hash calculations and XML parsing. V3 SDK packages not included in the list below have to be bundled with your source code. For an example of how to use a non-included SDK, see this example build script (buildExternalSdkFunction)

Bundled AWS SDK packages
@aws-sdk/client-cloudwatch-events
@aws-sdk/client-cloudwatch-logs
@aws-sdk/client-cognito-identity
@aws-sdk/client-cognito-identity-provider
@aws-sdk/client-dynamodb
@aws-sdk/client-eventbridge
@aws-sdk/client-kms
@aws-sdk/client-lambda
@aws-sdk/client-s3
@aws-sdk/client-secrets-manager
@aws-sdk/client-ses
@aws-sdk/client-sfn
@aws-sdk/client-sns
@aws-sdk/client-sqs
@aws-sdk/client-ssm
@aws-sdk/client-sts
@aws-sdk/client-xray
@aws-sdk/credential-providers
@aws-sdk/lib-dynamodb
@aws-sdk/lib-storage
@aws-sdk/s3-presigned-post
@aws-sdk/s3-request-presigner
@aws-sdk/util-dynamodb
@aws-sdk/util-user-agent-browser
@smithy
@aws-crypto

Important

LLRT currently does not support returning streams from SDK responses. Use response.Body.transformToString() or response.Body.transformToByteArray() as shown below.

const response = await client.send(command);
const str = await response.Body.transformToString();
// or: const bytes = await response.Body.transformToByteArray();
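Put together, a minimal sketch of reading an S3 object body (client construction shown for completeness; bucket and key are illustrative):

import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client({});
const response = await client.send(
  new GetObjectCommand({ Bucket: "my-bucket", Key: "my-key.json" })
);
// collect the whole body as a string (use transformToByteArray() for binary data)
const body = await response.Body.transformToString();
console.log(body);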

Running TypeScript with LLRT

The same principle as for dependencies applies when using TypeScript: TypeScript must be bundled and transpiled into ES2023 JavaScript.

Note

LLRT will not support running TypeScript without transpilation. This is by design for performance reasons. Transpiling requires CPU and memory that add latency and cost during execution. This can be avoided if transpilation is done ahead of time during deployment.
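For example, with esbuild the ahead-of-time transpilation step might look like this (entry point and output names are illustrative):

esbuild index.ts --platform=node --target=es2023 --format=esm --bundle --minify --outfile=dist/index.mjs --external:@aws-sdk --external:@smithy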

Rationale

What justifies the introduction of another JavaScript runtime in light of existing options such as Node.js, Bun & Deno?

Node.js, Bun, and Deno represent highly proficient JavaScript runtimes. However, they are designed with general-purpose applications in mind. These runtimes were not specifically tailored for the demands of a Serverless environment, characterized by short-lived runtime instances. They each depend on a Just-In-Time (JIT) compiler for dynamic code compilation and optimization during execution. While JIT compilation offers substantial long-term performance advantages, it carries a computational and memory overhead.

In contrast, LLRT distinguishes itself by not incorporating a JIT compiler, a strategic decision that yields two significant advantages:

A) JIT compilation is a notably sophisticated technological component, introducing increased system complexity and contributing substantially to the runtime's overall size.

B) Without the JIT overhead, LLRT conserves both CPU and memory resources that can be more efficiently allocated to code execution tasks, thereby reducing application startup times.

Limitations

There are many cases where LLRT shows notable performance drawbacks compared with JIT-powered runtimes, such as large data processing, Monte Carlo simulations, or tasks with hundreds of thousands or millions of iterations. LLRT is most effective when applied to smaller Serverless functions dedicated to tasks such as data transformation, real-time processing, AWS service integrations, authorization, and validation. It is designed to complement existing components rather than serve as a comprehensive replacement for everything. Notably, since its supported APIs are based on the Node.js specification, transitioning back to alternative solutions requires minimal code adjustments.

Building from source

Clone the code and cd into the directory

git clone git@github.com:awslabs/llrt.git --recursive
cd llrt

Install git submodules if you've not cloned the repository with --recursive

git submodule update --init

Install rust

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | bash -s -- -y
source "$HOME/.cargo/env"

Install dependencies

# MacOS
brew install zig make cmake zstd node corepack

# Ubuntu
sudo apt -y install make zstd
sudo snap install zig --classic --beta

# Windows WSL2
sudo apt -y install cmake g++ gcc make zip zstd
sudo snap install zig --classic --beta

# Windows WSL2 (If Node.js is not yet installed)
sudo curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/master/install.sh | bash
nvm install --lts

Install Node.js packages

corepack enable
yarn

Generate libs and set up Rust targets & toolchains

make stdlib && make libs

Note

If these commands exit with an error that says can't cd to zstd/lib, you've not cloned this repository recursively. Run git submodule update --init to download the submodules and run the commands above again.

Build release for Lambda

make release-arm64
# or for x86-64, use
make release-x64

Optionally build for your local machine (Mac or Linux)

make release

You should now have an llrt-lambda-arm64.zip or llrt-lambda-x64.zip. You can manually upload it as a Lambda layer or use it via your Infrastructure-as-Code pipeline.

Running Lambda emulator

Please note that in order to run the example you will need:

  • Valid AWS credentials via ~/.aws/credentials or via environment variables:
export AWS_ACCESS_KEY_ID=XXX
export AWS_SECRET_ACCESS_KEY=YYY
export AWS_REGION=us-east-1
  • A DynamoDB table (with id as the partition key) on us-east-1
  • The dynamodb:PutItem IAM permission on this table. You can use this policy (don't forget to modify <YOUR_ACCOUNT_ID>):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "putItem",
      "Effect": "Allow",
      "Action": "dynamodb:PutItem",
      "Resource": "arn:aws:dynamodb:us-east-1:<YOUR_ACCOUNT_ID>:table/quickjs-table"
    }
  ]
}

Start the lambda-server.js in a separate terminal

node lambda-server.js

Then run llrt:

make run

Environment Variables

LLRT_EXTRA_CA_CERTS=file

Load extra certificate authorities from a PEM encoded file

LLRT_GC_THRESHOLD_MB=value

Set a memory threshold in MB for garbage collection. Default threshold is 20MB

LLRT_HTTP_VERSION=value

Restrict HTTP requests to use a specific version. By default HTTP 1.1 and 2 are enabled. Set this variable to 1.1 to only use HTTP 1.1

LLRT_LOG=[target][=][level][,...]

Filter the log output by target module, level, or both (using =). Log levels are case-insensitive and will also enable any higher priority logs.

Log levels in descending priority order:

  • Error
  • Warn | Warning
  • Info
  • Debug
  • Trace

Example filters:

  • warn will enable all warning and error logs
  • llrt_core::vm=trace will enable all logs in the llrt_core::vm module
  • warn,llrt_core::vm=trace will enable all logs in the llrt_core::vm module and all warning and error logs in other modules
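For instance, to run a script with one of these filters (the entry point name is illustrative):

LLRT_LOG=warn,llrt_core::vm=trace ./llrt index.mjs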

LLRT_NET_ALLOW="host[ ...]"

Space-delimited list of hosts or socket paths which should be allowed for network connections. Network connections will be denied for any host or socket path missing from this list. Set an empty list to deny all connections

LLRT_NET_DENY="host[ ...]"

Space-delimited list of hosts or socket paths which should be denied for network connections

LLRT_NET_POOL_IDLE_TIMEOUT=value

Set a timeout in seconds for idle sockets being kept-alive. Default timeout is 15 seconds

LLRT_TLS_VERSION=value

Set the TLS version to be used for network connections. By default only TLS 1.2 is enabled. TLS 1.3 can also be enabled by setting this variable to 1.3
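As an illustration, a shell snippet combining several of these variables (all values are examples, not recommendations):

export LLRT_EXTRA_CA_CERTS=./certs/internal-ca.pem
export LLRT_HTTP_VERSION=1.1
export LLRT_TLS_VERSION=1.3
export LLRT_NET_POOL_IDLE_TIMEOUT=30
export LLRT_NET_ALLOW="dynamodb.us-east-1.amazonaws.com sqs.us-east-1.amazonaws.com"
./llrt index.mjs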

Benchmark Methodology

Although Init Duration reported by Lambda is commonly used to understand cold start impact on overall request latency, this metric does not include the time needed to copy code into the Lambda sandbox.

The technical definition of Init Duration (source):

For the first request served, the amount of time it took the runtime to load the function and run code outside of the handler method.

Measuring round-trip request duration provides a more complete picture of user-facing cold-start latency.

Lambda invocation results (the λ-labeled row) report the sum of Init Duration + Function Duration.

Security

See CONTRIBUTING for more information.

License

This library is licensed under the Apache-2.0 License. See the LICENSE file.

llrt's People

Contributors

ahaoboy, calavera, dependabot[bot], floydspace, fredbonin, georgesmith46, imaitland, kevinmingtarja, kikobeats, kirakernel, kylegrabfelder, marukome0743, maxday, mertalev, nabetti1720, neeraj-ghodla, nikp, nsalerni, p0wl, refi64, richarddavison, shulandmimi, simon0191, stephencroberts, sumeet, syrusakbary, sytten, tmokmss, watany-dev, yollotltam


llrt's Issues

EventEmitter cannot be default-imported

in Node.js both of these imports are valid and work:

import EventEmitter from 'events'
import { EventEmitter } from 'events'

const a = new EventEmitter()

however in llrt (0.1.5-beta) the former fails:

// llrt-test.mjs
import EventEmitter from 'events'

const a = new EventEmitter()
$ ./llrt llrt-test.mjs
TypeError: not a constructor
    at <anonymous> (llrt-test.mjs:3:11)


Read from stdin and write to stdout

Is there a plan to support reading from stdin and writing to stdout? For instance in BunJS, I can do streaming:

for await (let chunk of Bun.stdin.stream()) {}
const writer = Bun.stdout.writer();

Unable to build on aarch64 Raspberry Pi

I'm hitting the following error on the compile step for ring:

warning: ring@…: error: unable to parse target query 'aarch64-unknown-linux-musl': UnknownOperatingSystem
warning: ring@…: ToolExecError: Command "/home/moderation/Library/llrt/linker/cc-aarch64-linux-musl" "-O3" "-ffunction-sections" "-fdata-sections" "-fPIC" "--target=aarch64-unknown-linux-musl" "-I" "include" "-I" "/home/moderation/Library/llrt/target/aarch64-unknown-linux-musl/release/build/ring-27e1b34e3efd1c61/out" "-Wall" "-Wextra" "-fvisibility=hidden" "-std=c1x" "-Wall" "-Wbad-function-cast" "-Wcast-align" "-Wcast-qual" "-Wconversion" "-Wmissing-field-initializers" "-Wmissing-include-dirs" "-Wnested-externs" "-Wredundant-decls" "-Wshadow" "-Wsign-compare" "-Wsign-conversion" "-Wstrict-prototypes" "-Wundef" "-Wuninitialized" "-g3" "-nostdlibinc" "-DNDEBUG" "-DRING_CORE_NOSTDLIBINC=1" "-o" "/home/moderation/Library/llrt/target/aarch64-unknown-linux-musl/release/build/ring-27e1b34e3efd1c61/out/fad98b632b8ce3cc-curve25519.o" "-c" "crypto/curve25519/curve25519.c" with args "cc-aarch64-linux-musl" did not execute successfully (status code exit status: 1).running: "/home/moderation/Library/llrt/linker/cc-aarch64-linux-musl" "-O3" "-ffunction-sections" "-fdata-sections" "-fPIC" "--target=aarch64-unknown-linux-musl" "-I" "include" "-I" "/home/moderation/Library/llrt/target/aarch64-unknown-linux-musl/release/build/ring-27e1b34e3efd1c61/out" "-Wall" "-Wextra" "-fvisibility=hidden" "-std=c1x" "-Wall" "-Wbad-function-cast" "-Wcast-align" "-Wcast-qual" "-Wconversion" "-Wmissing-field-initializers" "-Wmissing-include-dirs" "-Wnested-externs" "-Wredundant-decls" "-Wshadow" "-Wsign-compare" "-Wsign-conversion" "-Wstrict-prototypes" "-Wundef" "-Wuninitialized" "-g3" "-nostdlibinc" "-DNDEBUG" "-DRING_CORE_NOSTDLIBINC=1" "-o" "/home/moderation/Library/llrt/target/aarch64-unknown-linux-musl/release/build/ring-27e1b34e3efd1c61/out/ca4b6ef5433f5aeb-aes_nohw.o" "-c" "crypto/fipsmodule/aes/aes_nohw.c"

I think the error is related to briansmith/ring#563 (comment) however none of the work arounds are helping.

Not sure if related, but downloading the latest release binary for Linux aarch64 results in a crash:

llrt --version
fish: Job 1, 'llrt --version' terminated by signal SIGILL (Illegal instruction)

Upgrade to Hyper 1.0

Hyper 1.0 now features a lower-level API that requires handling handshakes, TCP connections, and reading data from frames.

LLRT should use this lower-level API to be able to support behaviour similar to the http and https modules from Node.

Switching to a lower-level API also enables switching to AWS LibCrypto to improve TLS handshake time.

The downside would be that we now need to implement a connection pool since that was previously handled by hyper.

However, we can reuse this connection pool across fetch, http/https and raw sockets for better performance. This is not possible in the current implementation.

References:
https://docs.rs/aws-lc-rs - AWS LibCrypto
https://docs.rs/deadpool/0.10.0/deadpool/ - General Purpose Connection Pool


Crypto.randomBytes returns Uint8Array instead of Buffer

Say we have the following code:

// index.mjs
import { randomBytes } from "crypto";
console.log(randomBytes(10).toString("base64"));
console.log(randomBytes(10));

It does not return the same result (aside from randomness) as Node.js

node index.mjs
# base64 string
3HJqTvKlThpveA==
<Buffer ff 68 8c 61 7d 57 32 8b 00 36>


llrt index.mjs
# just an array
199,143,52,135,209,139,167,219,9,14
Uint8Array [ 169, 68, 55, 29, 169, 125, 37, 213, 114, 125 ]

It seems the returned type from randomBytes is different between llrt and node.

Unable to import `console` module

Hi, we have received a report from a customer that LLRT is not compatible with Powertools for AWS Lambda (TypeScript) (aws-powertools/powertools-lambda-typescript#2050).

From an initial test, we were able to narrow down the issue to the usage of the console module. Below is a code snippet that results in an exception:

import { Console } from "node:console";

const con = new Console({ stdout: process.stdout, stderr: process.stderr });

export const handler = async (event: any) => {
  con.log("Hello, world!");
  return "Hello, world!";
};

Full error below:

2024-02-12T15:15:47.833Z        n/a     ERROR   ReferenceError: Error resolving module 'console' from '/var/task/index.mjs'
INIT_REPORT Init Duration: 40.23 ms     Phase: init     Status: error   Error Type: Runtime.ExitError
2024-02-12T15:15:48.350Z        n/a     ERROR   ReferenceError: Error resolving module 'console' from '/var/task/index.mjs'
INIT_REPORT Init Duration: 523.69 ms    Phase: invoke   Status: error   Error Type: Runtime.ExitError
START RequestId: 36e99bfc-806a-4326-963b-354ee212c545 Version: $LATEST
END RequestId: 36e99bfc-806a-4326-963b-354ee212c545
REPORT RequestId: 36e99bfc-806a-4326-963b-354ee212c545  Duration: 551.40 ms     Billed Duration: 551 ms Memory Size: 128 MB     Max Memory Used: 21 MB  Status: error      Error Type: Runtime.ExitError

The console module is listed as supported in the compatibility matrix in your readme but doesn't appear in the API docs document.

Any chance that you could confirm whether the module is expected to work? If not, we'd like to raise a feature request for it to be eventually implemented on behalf of this customer.

Are dependencies via lambda layers (e.g. Sentry) supposed to work?

We are using Sentry via a lambda layer in our application. See here. Layer arn: arn:aws:lambda:eu-central-1:943013980633:layer:SentryNodeServerlessSDK:193

The @sentry/serverless package is imported in our app like this:
import { AWSLambda } from '@sentry/serverless'; (which esbuild turns into var import_serverless = require("@sentry/serverless");)

In our cdk build step, we exclude it via external modules:

externalModules: ['@sentry/serverless', ...]

but with LLRT, we get the following error when the lambda function is invoked:

ERROR	{
  errorType: 'ReferenceError',
  errorMessage: 'Error resolving module '@sentry/serverless' from '/var/task/index.mjs'',
  stackTrace: [ '' ]
}

If I remove @sentry/serverless from externalModules, the error disappears, so I guess the layer is not getting picked up by LLRT.

FS with callbacks

Since all fs methods are async in LLRT, implementing callbacks for fs is trivial: they could be wrapped in a "callbackify" utility that awaits the future and calls a function with cb(err, result) rather than awaiting the result. A sketch follows below.
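A rough sketch of such a utility (a hypothetical helper, not LLRT's actual implementation):

// Wrap a promise-returning fs function into a Node-style callback API
const callbackify =
  (fn) =>
  (...args) => {
    const cb = args.pop();
    fn(...args).then(
      (result) => cb(null, result),
      (err) => cb(err)
    );
  };

// e.g. const readFile = callbackify(fsPromises.readFile);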

Size

S

process.versions is undefined

I wanted to try LLRT via https://github.com/tmokmss/cdk-lambda-llrt, but the lambda execution fails on initialization, as a library we use (pg-promise) relies on process.versions.node to be set.

In LLRT process.version is set, but process.versions is not (see nodejs docs).

I can see that llrt would not return process.versions.node with a valid nodejs version number (since it is not a nodejs version), but LLRT should probably return some kind of object for process.versions at least, right?

Thanks for your work!

Error unable to load module 'url'

Currently I use esbuild to build the main.js file; however, it seems the url module is not supported and the error below appears. Please help.

Response
{
  "errorType": "ReferenceError",
  "errorMessage": "Error resolving module 'url' from '/var/task/dist/main.js'",
  "stackTrace": [
    ""
  ]
}

SourceMap Support

Sourcemaps are currently not supported, meaning that stack traces will reference transpiled JS sources. This is not ideal for debugging purposes.

The parcel bundler has implemented support for source maps in a Rust package. It's not available on crates.io, but the logic of deserializing a sourcemap is not very complicated.

https://github.com/parcel-bundler/source-map/tree/master/parcel_sourcemap

Here is an explanation of sourcemaps:
https://www.bugsnag.com/blog/source-maps/

If this could be implemented without pulling in a serde_json dependency it would be ideal, since the QuickJS engine already has a built-in JSON parser.

Will llrt be a WinterCG Runtime?

https://wintercg.org/

Many cloud providers and other groups in the industry are aligning on standards for new non-node based runtimes.

Will llrt attempt to follow specifications created by the WinterCG?

Library authors are starting to need to consider differences between many JavaScript runtimes; aligning on standards for these new runtimes will help ensure consistency and compatibility.

Implement net socket connections

Description

Started implementing basic support for sockets but this was removed due to a major version upgrade of the QuickJS binding layer.

Since then, spawn from child_process has been implemented and uses a native readable/writable stream for reading from stdin, stderr and stdout. This readable/writable stream could be used for sockets as well, as it uses the same interface for communication (event emitter).

Size

L

provide @types/llrt .d.ts files for built in modules

Expressing the Node.js compatibility matrix and API guide as typescript .d.ts files would let users statically check that they are only using supported features, and provide inline IDE documentation.

Users could rely on @types/node but would have to guess at the target version (is it 20, 18, 22?) and remember which APIs aren't supported or rely on tests.

why not hermes?

I note that hermes isn't mentioned in your "why not an existing solution" section. It'd be great to have an analysis of that one, since its goals seem much more aligned with LLRT's.

Windows Support

It's mainly the node build script that relies on Unix paths at the moment. Path.join is platform-specific (which is strange, since Windows has supported forward slashes forever).

Built-in @aws-sdk/client-s3 Body transform* functions transform self rather than body

This was working in either 0.1.6 or 0.1.7, but with 0.1.8 this code

const command = new GetObjectCommand({
  Bucket: BUCKET,
  Key: "index.html",
});
const response = await s3.send(command);
console.log({
  response,
  bodyString: await response.Body.transformToString(),
  bodyArray: await response.Body.transformToByteArray(),
});

logs

{
  response: {
    $metadata: { httpStatusCode: 200, ..., cfId: undefined, attempts: 1, totalRetryDelay: 0 },
    AcceptRanges: 'bytes',
    LastModified: 2024-02-21T15:09:47.000Z,
    ContentLength: 582,
    ...
    CacheControl: 'no-cache,max-age=0,public',
    ContentType: 'text/html',
    ServerSideEncryption: 'AES256',
    Metadata: {},
    Body: { transformToByteArray: [function: Me], transformToString: [function: Re], transformToWebStream: [function: Ie] }
  },
  bodyString: '[object Object]',
  bodyArray: [Circular]
}

Seems like the transform* functions are operating on the Body object that holds those functions, rather than on the response's actual Body.

S3: "errorMessage": "not a function",

Hello,

A Lambda function that works perfectly on Node 20 can upload a file into S3.
When I switch to LLRT, I cannot do it anymore.

SAM TEMPLATE:

UploadToS3:
    Type: AWS::Serverless::Function
    Properties:
      Timeout: 10
      Architectures: ["arm64"]
      MemorySize: 128
      Runtime: provided.al2
      # MemorySize: 1024
      # Runtime: nodejs20.x
      .....
   Metadata:
      BuildMethod: esbuild
      BuildProperties:
        External:
          - '@aws-sdk/client-s3'
        Minify: true
        Target: "es2020"
        Sourcemap: false
        Format: esm
        OutExtension:
          - .js=.mjs
        EntryPoints:
        - uploadToS3.ts

LAMBDA:

import { S3Client, ClientDefaults, PutObjectCommand, PutObjectCommandInput, ObjectCannedACL } from "@aws-sdk/client-s3";

const clientDefaults: ClientDefaults = {
  region: process.env.AWS_REGION,
};
const s3Client = new S3Client(clientDefaults);

export const handler = async (event) => {
  console.debug("event", JSON.stringify(event));

  try {
    const jsonFile = {
      "data": event,
    }
    const params: PutObjectCommandInput = {
      Bucket: process.env.BUCKET_NAME,
      Key: `path/${event.id}.json`,
      Body: JSON.stringify(jsonFile),
      ACL: ObjectCannedACL.bucket_owner_full_control,
      ContentType: "application/json",
    };
    const command = new PutObjectCommand(params);
    await s3Client.send(command);
  } catch (error) {
    console.error(error);
  }
};

LOG:

{
  "errorType": "TypeError",
  "errorMessage": "not a function",
  "stackTrace": [
    "    at C (/var/task/uploadToS3.mjs:1:159)",
    "    at startProcessEvents (@llrt/runtime:5:159)",
    ""
  ],
  "requestId": "c03887d1-66d8-476b-ada6-cd8e1aaadfc6"
}

{
  "time": "2023-11-25T08:34:30.527Z",
  "type": "platform.runtimeDone",
  "record": {
    "requestId": "c03887d1-66d8-476b-ada6-cd8e1aaadfc6",
    "status": "success",
    "metrics": {
      "durationMs": 2.25,
      "producedBytes": 0
    }
  }
}
{
  "time": "2023-11-25T08:34:30.529Z",
  "type": "platform.report",
  "record": {
    "requestId": "c03887d1-66d8-476b-ada6-cd8e1aaadfc6",
    "status": "success",
    "metrics": {
      "durationMs": 2.556,
      "billedDurationMs": 44,
      "memorySizeMB": 128,
      "maxMemoryUsedMB": 29,
      "initDurationMs": 40.651
    }
  }
}

Full SDK Binaries

In order to keep the binary size to a minimum, only a subset of clients from the full AWS SDK v3 is currently bundled.
Multiple binaries

We could, via feature flags, build multiple flavors of LLRT:

  • llrt-full - The full SDK bundled
  • llrt-min - No SDK bundled
  • llrt - The default version (current) with the most popular packages bundled

Furthermore, we could allow customers to download the bundled and optimized SDKs as .lrt files. .lrt files are zstd-compressed QuickJS bytecode that can be imported in llrt (already supported).

Size

L

'Response' is not defined?

I noticed that the Response object, which is strangely defined globally in the source code [1], cannot be referenced during the execution of llrt. I found this while writing a simple web server.

I can see this in the following way:

# https://github.com/awslabs/llrt/releases/download/v0.1.6-beta/llrt-darwin-arm64.zip
llrt -e 'new Request("http://localhost:3000/") && console.log("OK")'

OK
llrt -e 'new Response() && console.log("OK")'

ReferenceError: 'Response' is not defined
    at <eval> (eval_script:1:5)

Footnotes

[1] https://github.com/awslabs/llrt/blob/f0a4983d4fe45890f037741a66ee53707d5adcb5/src/http/mod.rs#L16-L28

top-level await throws SyntaxError

basically the subject

// llrt-test.mjs
function foo() {
  return Promise.resolve(42)
}

console.log(await foo())
$ ./llrt llrt-test.mjs
SyntaxError: unexpected 'await' keyword
    at llrt-test.mjs:5:12

Error Message Inconsistency: 'Could not create todo' should be 'Could not delete todo'

throw new Error("Could not create todo");


I found that the error message is not correct in /example/functions/src/react/TodoList.tsx. I have attached the line number and a screenshot as well. Instead of the 'Could not create todo' error message, it should be 'Could not delete todo'.

#25


Implement native streams

Currently Readable & Writable streams are only available from a polyfilled JavaScript module and are not used inside the SDK or for fetch calls, for performance and latency reasons.

LLRT should implement Readable, Writable, Duplex and Transform streams natively in Rust. This would allow performant stream processing, increase throughput, and be more memory efficient, as users can pipe data through streams.

However, the streams API is gigantic. The polyfilled streams module is almost 6500 lines of JS supporting a lot of edge cases. There is also some difference between Node.js Streams and WebStreams.

A very basic initial implementation currently exists for the child_process and socket modules and can be expanded to support a richer Streams API.

segfault when calling `os.release()`

running the following

// llrt-test.mjs
console.log(require('os').release())

results in a segmentation fault:

$ ./llrt llrt-test.mjs
[1]    76305 segmentation fault  ./llrt llrt-test.mjs

LLRT (darwin arm64) 0.1.5-beta
macOS 14.2.1 on MacBook Pro (M1, 2021)

Invalid JSON serialization of dates in v0.1.4+

Since version v0.1.4 and onward through the current (v0.1.7) release, JSON serialization of objects that include a Date property results in improperly formatted JSON, with duplicate rendering of keys.

Using v0.1.7-beta release on MacOS ARM64 (and confirmed in Lambda ARM64):

date-test.js

const value = {
  date: new Date()
}

console.log(JSON.stringify(value))

Expected output: {"date":"2024-02-20T17:13:17.908Z"}
Received output: {"date":"date":"2024-02-20T17:14:06.578Z"} which is invalid JSON (note the doubling of the "date")

Testing backwards, this appears to work correctly in v0.1.3 and to be broken in subsequent versions:

% ./llrt-0.1.3 -v && ./llrt-0.1.3 date-test.js
LLRT (darwin arm64) 0.1.3-beta
{"date":"2024-02-20T17:15:41.779Z"}
% ./llrt-0.1.4 -v && ./llrt-0.1.4 date-test.js
LLRT (darwin arm64) 0.1.4-beta
{"date":"date":"2024-02-20T17:15:51.029Z"}
% ./llrt-0.1.5 -v && ./llrt-0.1.5 date-test.js
LLRT (darwin arm64) 0.1.5-beta
{"date":"date":"2024-02-20T17:15:57.396Z"}
% ./llrt-0.1.6 -v && ./llrt-0.1.6 date-test.js
LLRT (darwin arm64) 0.1.6-beta
{"date":"date":"2024-02-20T17:16:03.466Z"}
% ./llrt-0.1.7 -v && ./llrt-0.1.7 date-test.js
LLRT (darwin arm64) 0.1.7-beta
{"date":"date":"2024-02-20T17:16:08.797Z"}

And just for comparison, how it works in node:

% node -v && node date-test.js
v20.11.0
{"date":"2024-02-20T17:19:23.854Z"}

ReferenceError: Error resolving module '/var/task/@aws-sdk/client-eventbridge'

Hello,

Lambda config:

  Function:
    MemorySize: 256
    Architectures: ["arm64"]
    Runtime: provided.al2
    Layers:
      - !Sub arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:layer:LLRT:1

Deployed with

    Metadata:
      BuildMethod: esbuild
      BuildProperties:
        External:
          - '@aws-sdk/client-eventbridge'
          - '@aws-sdk/client-dynamodb'
          - '@aws-sdk/util-dynamodb'
        Minify: true
        Target: "es2020"
        Sourcemap: false
        EntryPoints: 
          - list.ts

Packages:

  "dependencies": {
    "@aws-sdk/client-dynamodb": "^3.433.0",
    "@aws-sdk/util-dynamodb": "^3.433.0",
    "@aws-sdk/client-eventbridge": "^3.433.0",
    "uuid": "^9.0.1"
  }

Log:

INIT_START Runtime Version: provided:al2.v25 Runtime Version ARN: arn:aws:lambda:eu-central-1::runtime:dce29199fb5887a2c4fceaa2f34d395ba43a74a6895b381cb9383b1c7f3b5875
Binary launched
Decompressing using 2 threads
Extraction time: 13.7280 ms
Extraction + write time: 18.8690 ms
Starting app
ReferenceError: Error resolving module '/var/task/@aws-sdk/client-eventbridge' from '' at <anonymous> (/var/task/insertToSync.js:1:614)
START RequestId: 234d277d-adc9-58b0-a831-7acab3e3ad38 Version: $LATEST
Unknown application error occurred Runtime.Unknown
END RequestId: 234d277d-adc9-58b0-a831-7acab3e3ad38
REPORT RequestId: 234d277d-adc9-58b0-a831-7acab3e3ad38 Duration: 362.75 ms Billed Duration: 363 ms Memory Size: 128 MB Max Memory Used: 12 MB

AES encryption

There currently seems to be no way to do AES en/decryption, which is essential for many applications.

require('crypto') doesn't include createCipheriv/createDecipheriv, and SubtleCrypto doesn't seem to be implemented either.

TLS: ReferenceError: Error resolving module '/var/task/tls'

Hello,

As suspected, TLS is the one that blocks the first usage of LLRT.

Lambda config:

  Function:
    MemorySize: 256
    Architectures: ["arm64"]
    Runtime: provided.al2
    Layers:
      - !Sub arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:layer:LLRT:1

Deployed with

    Metadata:
      BuildMethod: esbuild
      BuildProperties:
        External:
          - '@aws-sdk/client-dynamodb'
          - '@aws-sdk/util-dynamodb'
          - '@aws-sdk/client-sqs'
          - '@aws-sdk/node-http-handler'
          - '@aws-sdk/client-sqs'
        Minify: true
        Target: "es2020"
        Sourcemap: false
        EntryPoints: 
          - list.ts

Packages:

  "dependencies": {
    "@aws-sdk/client-dynamodb": "^3.433.0",
    "@aws-sdk/util-dynamodb": "^3.433.0",
    "@aws-sdk/client-sqs": "^3.433.0",
    "@gomomento/sdk": "^1.45.0",
    "uuid": "^9.0.1"
  },

Log:

INIT_START Runtime Version: provided:al2.v25 Runtime Version ARN: arn:aws:lambda:eu-central-1::runtime:dce29199fb5887a2c4fceaa2f34d395ba43a74a6895b381cb9383b1c7f3b5875
Binary launched
Decompressing using 2 threads
Extraction time: 13.0210 ms
Extraction + write time: 17.8990 ms
Starting app
Error resolving module '/var/task/tls' from '/var/task/list.js'
START RequestId: d8cd0ae3-357a-4cbf-83c1-6351390603e8 Version: $LATEST
Unknown application error occurred Runtime.Unknown
END RequestId: d8cd0ae3-357a-4cbf-83c1-6351390603e8
REPORT RequestId: d8cd0ae3-357a-4cbf-83c1-6351390603e8 Duration: 1063.00 ms Billed Duration: 1063 ms Memory Size: 256 MB Max Memory Used: 28 MB

`signal` property missing from `Request` object

Forgive me, as I was unsure whether Request is covered by fetch in the compatibility matrix or not, but I noticed that when creating a new Request object with an AbortSignal, the same AbortSignal is not then accessible from the signal property of the newly-created object:

// index.js
const controller = new AbortController();
const req = new Request('http://localhost/', { signal: controller.signal });

console.log(req.signal);
node index.js
# AbortSignal { aborted: false }
./llrt index.js
# undefined

Static properties throws syntax error

There appears to be an issue parsing static properties on a Class.

Error:

{
  "errorType": "SyntaxError",
  "errorMessage": "invalid property name",
  "stackTrace": [
    "    at /var/task/dist/server/entry.mjs:1878:13",
    ""
  ]
}

Entry File (entry.mjs)

1868: var init_astro_msqbqiov = __esm({
1869:   "dist/server/chunks/astro_msqbqiov.mjs"() {
1870:     "use strict";
1871:     init_colors();
1872:     init_clsx();
1873:     import_cssesc = __toESM(require_cssesc(), 1);
1874:     init_esm();
1875:     __name(normalizeLF, "normalizeLF");
1876:     __name(codeFrame, "codeFrame");
1877:     AstroError = class extends Error {
1878:       static {
1879:         __name(this, "AstroError");
1880:       }
1881:       loc;
1882:       title;
1883:       hint;
1884:       frame;
1885:       type = "AstroError";
1886:       constructor(props, options) {
1887:         const { name, title, message, stack, location, hint, frame: frame2 } = props;
1888:         super(message, options);
1889:         this.title = title;
1890:         this.name = name;
1891:         if (message)
1892:           this.message = message;
1893:         this.stack = stack ? stack : this.stack;
1894:         this.loc = location;
1895:         this.hint = hint;
1896:         this.frame = frame2;
1897:       }

Profile-Guided Optimization (PGO) benchmarks

Hi!

I tried to apply Profile-Guided Optimization (PGO) to optimize llrt performance further (as I already did for many other projects - see all current results here). I performed some basic benchmarks and want to share the results here.

Test environment

  • Fedora 39
  • Linux kernel 6.7.3
  • AMD Ryzen 9 5900x
  • 48 GiB RAM
  • SSD Samsung 980 Pro 2 TiB
  • Compiler - Rustc 1.76
  • llrt version: the latest for now from the main branch on commit c040bfd05a2be8d3300e7a1bbfc9405c42a865fa
  • Disabled Turbo boost (for more stable results across benchmark runs)

Benchmark

As a benchmark, I use the same command as I found in the Makefile: llrt fixtures/hello.js. The same scenario is used for the PGO training phase. All PGO optimization steps are done with the cargo-pgo tool. The PGO-instrumented version is built with cargo pgo build, and the PGO-optimized version with cargo pgo optimize build. taskset -c 0 is used to reduce the influence of CPU scheduling on the results.

Results

I got the following results:

hyperfine -u microsecond -N --warmup=2000 --min-runs 10000 "taskset -c 0 ./llrt_optimized ../fixtures/hello.js" "taskset -c 0 ./llrt_release ../fixtures/hello.js"
Benchmark 1: taskset -c 0 ./llrt_optimized ../fixtures/hello.js
  Time (mean ± σ):     2664.8 µs ±  78.8 µs    [User: 590.1 µs, System: 1943.3 µs]
  Range (min … max):   2478.1 µs … 4486.1 µs    10000 runs

  Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.

Benchmark 2: taskset -c 0 ./llrt_release ../fixtures/hello.js
  Time (mean ± σ):     2796.1 µs ±  63.6 µs    [User: 601.4 µs, System: 2068.9 µs]
  Range (min … max):   2647.5 µs … 4495.0 µs    10000 runs

  Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.

Summary
  taskset -c 0 ./llrt_optimized ../fixtures/hello.js ran
    1.05 ± 0.04 times faster than taskset -c 0 ./llrt_release ../fixtures/hello.js

where llrt_release is the usual release version and llrt_optimized is the PGO-optimized version.

I ran the benchmark multiple times, with different command orders, etc. In all cases, the PGO-optimized version was faster than the usual release version. However, it would be awesome to perform some more precise benchmarks.

Further steps

I can suggest doing the following things:

  • Perform more PGO benchmarks with some more precise performance measurements.
  • If PGO is worth it - add a note to the documentation about it and, possibly, add an option to the build scripts to make it easier to optimize llrt with the existing build infrastructure.
  • Try to play with Post-Link Optimization (PLO) with tools like LLVM BOLT.

I hope these benchmark results can be interesting to someone.

Fail to resolve module index

Hey!
I tried to run the test with make run but it fails with Error resolving module 'xx/lambda-llrt/index' from '@llrt/runtime'
Tracking down the error, it comes from https://github.com/awslabs/llrt/blob/main/src/js/%40llrt/runtime.ts#L214

I had to manually hardcode the .mjs extension in src/js/@llrt/runtime.ts so the line becomes

const handlerModule = await import(`${taskRoot}/${moduleName}.mjs`);

Does it work on your side without it? Which version of node are you using?

Upgrade from CLI

Create an upgrade action via CLI.
llrt upgrade

This could query the GitHub repo (via the API) and pull down the latest release according to the user's OS.
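A rough sketch of what such a command could do under the hood, using the public GitHub releases API (the endpoint is real; the jq filter is illustrative):

curl -s https://api.github.com/repos/awslabs/llrt/releases/latest \
  | jq -r '.assets[].browser_download_url'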

Coverage recording

Is there a way to save code coverage details when running tests?

Under Node.js, the env var NODE_V8_COVERAGE allows saving the V8 coverage; Bun and Deno both have a --coverage flag.

Would be great to have this also in LLRT. That said, I don't see any support in QuickJS, and instrumenting code (à la NYC/Istanbul) can be a bit of a hassle...

callback function is 'not a function'?

Hello,
I encountered an error when trying to use a callback function in AWS Lambda Authorizer.

Here is my implementation.
The authentication process has been excluded.

export const handler = async (event, context, callback) => {

  var authResponse = buildAllowAllPolicy(event, 'foo');
  callback(null, authResponse);
};

function buildAllowAllPolicy (event, principalId) {
  const policy = {
    principalId: principalId,
    policyDocument: {
      Version: '2012-10-17',
      Statement: [
        {
          Action: 'execute-api:Invoke',
          Effect: 'Allow',
          Resource: event.methodArn
        }
      ]
    }
  };
  return policy;
}
START RequestId: e50ca89a-72c6-436f-9534-9b7d9fe2cf99 Version: $LATEST
2024-02-19T13:34:52.844Z	e50ca89a-72c6-436f-9534-9b7d9fe2cf99	ERROR	    at handler (/var/task/index.mjs:4:3)
at startProcessEvents (@llrt/runtime:5:187)
2024-02-19T13:34:52.844Z	e50ca89a-72c6-436f-9534-9b7d9fe2cf99	ERROR	{
  errorType: 'TypeError',
  errorMessage: 'not a function',
  stackTrace: [ '    at handler (/var/task/index.mjs:4:3)', '    at startProcessEvents (@llrt/runtime:5:187)', '' ],
  requestId: 'e50ca89a-72c6-436f-9534-9b7d9fe2cf99'
}
END RequestId: e50ca89a-72c6-436f-9534-9b7d9fe2cf99
REPORT RequestId: e50ca89a-72c6-436f-9534-9b7d9fe2cf99	Duration: 1.71 ms	Billed Duration: 35 ms	Memory Size: 128 MB	Max Memory Used: 21 MB	Init Duration: 32.70 ms	

Do you know what could be the cause?

S3 GetObjectCommand is throwing an exception when used (Unexpected stream implementation, expect Blob or ReadableStream, got Uint8Array)

Description

When you try to download an object from S3, the S3Client throws an exception instead:


2024-02-13T02:05:55.689Z	fc15815e-245d-4d51-8633-24deabb3699e	ERROR	    at Fr (llrt-chunk-sdk-GKKMFRJI.js:6:131)
at _g (@aws-sdk/client-s3:1:55290)
at <anonymous> (llrt-chunk-sdk-SXB3ZJRL.js:1:14739)
2024-02-13T02:05:55.689Z	fc15815e-245d-4d51-8633-24deabb3699e	ERROR	{  errorType: 'Error',  errorMessage: 'Unexpected stream implementation, expect Blob or ReadableStream, got Uint8Array
Deserialization error: to see the raw response, inspect the hidden field {error}.$response on this object.',  stackTrace: [ '    at Fr (llrt-chunk-sdk-GKKMFRJI.js:6:131)', '    at _g (@aws-sdk/client-s3:1:55290)', '    at <anonymous> (llrt-chunk-sdk-SXB3ZJRL.js:1:14739)', '' ],  requestId: 'fc15815e-245d-4d51-8633-24deabb3699e'}

To reproduce

  1. Deploy the stack in llrt/example/infrastructure using CDK
  2. Access the URL: https://API_GW/llrt-s3
  3. Replace the PutObjectCommand in llrt/example/functions/src/v3-s3.mjs with a GetObjectCommand like below:

     s3Client.send(
       new GetObjectCommand({
         Bucket: process.env.BUCKET_NAME,
         Key: "<KEY_OF_FILE_UPLOADED_AT_STEP_2>",
       })
     ),

  4. You will see the exception above.
