
onnxjs's Introduction


ONNX.js has been replaced by ONNX Runtime Web, which offers an enhanced user experience and improved performance. See the ONNX Runtime Web project for more information.

ONNX.js

ONNX.js is a JavaScript library for running ONNX models in browsers and on Node.js.

ONNX.js adopts WebAssembly and WebGL technologies to provide an optimized ONNX model inference runtime for both CPUs and GPUs.

Why ONNX models

The Open Neural Network Exchange (ONNX) is an open standard for representing machine learning models. The biggest advantage of ONNX is that it allows interoperability across different open-source AI frameworks, which offers more flexibility for AI framework adoption. See Getting ONNX Models.

Why ONNX.js

With ONNX.js, web developers can score pre-trained ONNX models directly in the browser. This reduces server-client communication, protects user privacy, and offers an install-free, cross-platform in-browser ML experience.

ONNX.js can run on both CPU and GPU. For CPU execution, WebAssembly is adopted to run the model at near-native speed. In addition, ONNX.js uses Web Workers to provide a "multi-threaded" environment for parallelizing data processing. Empirical evaluation shows very promising performance gains on CPU by taking full advantage of WebAssembly and Web Workers. For GPU execution, WebGL, a popular standard for accessing GPU capabilities, is adopted. ONNX.js also applies several novel optimization techniques to reduce data transfer between the CPU and GPU, as well as techniques to reduce GPU processing cycles, pushing performance further.
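
ONNX.js selects a backend automatically when none is specified, but a preferred backend can be requested with a backend hint when creating an inference session. A minimal sketch (the hint values correspond to the built-in cpu, wasm, and webgl backends):

// request the WebGL (GPU) backend; use "wasm" or "cpu" for CPU execution.
// Omit the option to let ONNX.js pick a backend automatically.
const session = new onnx.InferenceSession({ backendHint: "webgl" });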

See Compatibility and Operators Supported for a list of platforms and operators ONNX.js currently supports.

Benchmarks

Benchmarks have been run against the most prominent open source solutions in the same market. Below are the results collected for Chrome and Edge browsers on one sample machine (computations run on both CPU and GPU):

(Chart: benchmark results for Chrome and Edge on CPU and GPU)

NOTE:

  1. Keras.js doesn't support WebGL usage on Edge
  2. Keras.js and TensorFlow.js don't support WebAssembly usage on any browser

The specs of the machine that was used to perform the benchmarking are listed below:

  • OS: Microsoft Windows 10 Enterprise Insider Preview
  • Model: HP Z240 Tower Workstation
  • Processor: Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz, 3401 Mhz, 4 Core(s), 8 Logical Processor(s)
  • Installed Physical Memory (RAM): 32.0 GB
  • GPU make / Chip type: AMD FirePro W2100 / AMD FirePro SDI (0x6608)
  • GPU Memory (approx.): 18.0 GB

Demo

ONNX.js demo website shows the capabilities of ONNX.js. Check the code.

Getting Started

There are multiple ways to use ONNX.js in a project:

Using <script> tag

This is the most straightforward way to use ONNX.js. The following HTML example shows how to use it:

<html>
  <head> </head>

  <body>
    <!-- Load ONNX.js -->
    <script src="https://cdn.jsdelivr.net/npm/onnxjs/dist/onnx.min.js"></script>
    <!-- Code that consumes ONNX.js -->
    <script>
      // create a session
      const myOnnxSession = new onnx.InferenceSession();
      // load the ONNX model file
      myOnnxSession.loadModel("./my-model.onnx").then(() => {
        // generate model input
        const inferenceInputs = getInputs();
        // execute the model
        myOnnxSession.run(inferenceInputs).then((output) => {
          // consume the output
          const outputTensor = output.values().next().value;
          console.log(`model output tensor: ${outputTensor.data}.`);
        });
      });
    </script>
  </body>
</html>

Refer to browser/Add for an example.

Using NPM and bundling tools

Modern browser-based applications are usually built with frameworks like Angular, React, or Vue.js. These frameworks typically bundle the source code into one or more bundle files. The following TypeScript example shows how to use ONNX.js in an async context:

  1. Import Tensor and InferenceSession.
import { Tensor, InferenceSession } from "onnxjs";
  2. Create an instance of InferenceSession.
const session = new InferenceSession();
  3. Load the ONNX model.
// use the following in an async method
const url = "./data/models/resnet/model.onnx";
await session.loadModel(url);
  4. Create your input Tensor(s) similar to the example below. You need to do any pre-processing required by your model at this stage. For that, refer to the documentation of the model you have:
// creating an array of input Tensors is the easiest way. For other options see the API documentation
const inputs = [
  new Tensor(new Float32Array([1.0, 2.0, 3.0, 4.0]), "float32", [2, 2]),
];
  5. Run the model with the input Tensors. The output Tensor(s) are available once the run operation is complete:
// run this in an async method:
const outputMap = await session.run(inputs);
const outputTensor = outputMap.values().next().value;
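
Putting these steps together, a complete async function might look like the following minimal sketch (the model path and the dummy input are the ones used above; adjust them to your own model):

import { Tensor, InferenceSession } from "onnxjs";

async function run() {
  const session = new InferenceSession();
  await session.loadModel("./data/models/resnet/model.onnx");

  // dummy 2x2 input; replace with your model's pre-processed input
  const inputs = [
    new Tensor(new Float32Array([1.0, 2.0, 3.0, 4.0]), "float32", [2, 2]),
  ];

  const outputMap = await session.run(inputs);
  const outputTensor = outputMap.values().next().value;
  console.log(`model output tensor: ${outputTensor.data}.`);
}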

More verbose examples of how to use ONNX.js are located under the examples folder. For further info, see Examples.

Running in Node.js

ONNX.js can run in Node.js as well. This is usually for testing purposes. Use the require() function to load ONNX.js:

require("onnxjs");

You can also use the NPM package onnxjs-node, which offers a Node.js binding of ONNX Runtime.

require("onnxjs-node");

See usage of onnxjs-node.

Refer to node/Add for a detailed example.
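
As a minimal sketch (assuming a model file ./my-model.onnx next to the script), a Node.js script could look like the following; swap "onnxjs" for "onnxjs-node" to use the ONNX Runtime binding instead:

const { Tensor, InferenceSession } = require("onnxjs");

async function main() {
  const session = new InferenceSession();
  await session.loadModel("./my-model.onnx");

  // dummy 2x2 input; replace with input matching your model
  const input = new Tensor(new Float32Array([1.0, 2.0, 3.0, 4.0]), "float32", [2, 2]);
  const outputMap = await session.run([input]);
  console.log(outputMap.values().next().value.data);
}

main();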

Documents

Developers

For information on ONNX.js development, please check Development.

For API reference, please check API.

Getting ONNX models

You can get ONNX models easily in multiple ways: download a pre-trained model from the ONNX Model Zoo, convert a model from another framework such as PyTorch, TensorFlow, or Keras, or export a model trained with a service such as customvision.ai.

Learn more about ONNX

Compatibility

Desktop Platforms

OS/Browser | Chrome | Edge | FireFox | Safari | Opera | Electron | Node.js
Windows 10 | ✔️ | ✔️ | ✔️ | - | ✔️ | ✔️ | ✔️
macOS | ✔️ | - | ✔️ | ✔️ | ✔️ | ✔️ | ✔️
Ubuntu LTS 18.04 | ✔️ | - | ✔️ | - | ✔️ | ✔️ | ✔️

Mobile Platforms

OS/Browser | Chrome | Edge | FireFox | Safari | Opera
iOS | ✔️ | ✔️ | ✔️ | ✔️ | ✔️
Android | ✔️ | ✔️ | Coming soon | - | ✔️

Operators

ONNX.js currently supports most operators in ai.onnx operator set v7 (opset v7). See operators.md for a complete, detailed list of which ONNX operators are supported by the 3 available builtin backends (cpu, wasm, and webgl).

Support for ai.onnx.ml operators is coming soon. operators-ml.md has the most recent status of ai.onnx.ml operators.

Contribute

We’d love to embrace your contribution to ONNX.js. Please refer to CONTRIBUTING.md.

Thanks

Thanks to BrowserStack for providing cross browser testing support.

License

Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT License.


onnxjs's Issues

Conv2d and Pad 'Reflection' on Android `webgl` backend outputs are incorrect and inconsistent between devices

On Google Pixel 3, Pad 'Reflection' with webgl backend outputs zeros.
On Huawei P20 Pro, Pad 'Reflection' is correct, but Conv2D output is not.

Is there a requirement for webgl on Android devices?

I've created a quick test page at ONNX Op test page (https://gnsmrky.github.io/pytorch-fast-neural-style-onnxjs/test_ops.html). Go to the page on Android phones to do a quick test.

Please have a quick look at the above issues. I am trying to find a combination that works across different Android devices using the webgl backend, especially for Conv2d.

TypeError: unrecognized operator 'ATen'

When trying to use a unet model exported from pytorch 1.0 I get the following error:
TypeError: unrecognized operator 'ATen'
at createOperator (/home/alexis/devs/javainference/node_modules/onnxjs/lib/backends/cpu/ops-resolve.js:157:19)
at Object.resolve (/home/alexis/devs/javainference/node_modules/onnxjs/lib/backends/cpu/ops-resolve.js:35:14)
at WasmSessionHandler.createOperator (/home/alexis/devs/javainference/node_modules/onnxjs/lib/backends/wasm/session-handler.js:51:42)
at WasmSessionHandler.resolve (/home/alexis/devs/javainference/node_modules/onnxjs/lib/backends/wasm/session-handler.js:24:23)
at Session.initializeOps (/home/alexis/devs/javainference/node_modules/onnxjs/lib/session.js:261:48)
at /home/alexis/devs/javainference/node_modules/onnxjs/lib/session.js:125:19
at Profiler.event (/home/alexis/devs/javainference/node_modules/onnxjs/lib/instrument.js:200:25)
at Session.initialize (/home/alexis/devs/javainference/node_modules/onnxjs/lib/session.js:116:23)
at Session. (/home/alexis/devs/javainference/node_modules/onnxjs/lib/session.js:79:46)
at step (/home/alexis/devs/javainference/node_modules/onnxjs/lib/session.js:34:23)

I guess this is because the onnx.js library doesn't yet support this operator.

synchronous inference

Is there any way to run inference synchronously? That way it wouldn't require lots of changes for use cases where a busy loop waits for the inference result in a blocking way.

feature request: LRN operator

The LRN operator is widely used in many neural networks, such as ImageNet models. The WebGL backend does not support the LRN operation; it would be great if it were supported, since there is a huge performance gap between the WebGL and CPU backends.

ReduceMin and ReduceMax operations when coming from pytorch do not work

🐛 Bug

Calling a model with onnx::ReduceMin or onnx::ReduceMax results in:
Error: Uniform A not found.

To Reproduce

  1. Run this PyTorch (nightly) code.
    NOTE: the nightly build was installed because the stable version does not support these operations properly:
    conda install pytorch-nightly cudatoolkit=10.0 -c pytorch
import torch.nn as nn
import torch.onnx

dummy_input = torch.zeros(1, 1, 1, 1)

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()

    def forward(self, inputs):
        return inputs.min()
        # return inputs.max()

model = Model()

torch.onnx.export(model, dummy_input, '/tmp/tmp.onnx', verbose=True)
  2. Generated graph for min
graph(%0 : Float(1, 1, 1, 1)):
  %1 : Float() = onnx::ReduceMin[keepdims=0](%0), scope: Model
  return (%1)

graph torch-jit-export (
  %0[FLOAT, 1x1x1x1]
) {
  %1 = ReduceMin[keepdims = 0](%0)
  return %1
}
  3. Generated graph for max
graph(%0 : Float(1, 1, 1, 1)):
  %1 : Float() = onnx::ReduceMax[keepdims=0](%0), scope: Model
  return (%1)

graph torch-jit-export (
  %0[FLOAT, 1x1x1x1]
) {
  %1 = ReduceMax[keepdims = 0](%0)
  return %1
}

Environment

onnxjs version: 0.1.5
Firefox Quantum version: 64
Chromium version: 71.0.3578.98 (openSUSE Build) (64-bit)

PyTorch version: 1.0.0.dev20190404
Is debug build: No
CUDA used to build PyTorch: 10.0.130

OS: openSUSE Tumbleweed
GCC version: (GCC) 5.3.0
CMake version: version 3.12.2

Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 9.0.176
GPU models and configuration: GPU 0: GeForce GTX 980 Ti
Nvidia driver version: 410.93
cuDNN version: Could not collect

Versions of relevant libraries:
[pip] numpy==1.15.1
[pip] numpydoc==0.8.0
[pip] torch==1.0.0
[conda] blas                      1.0                         mkl  
[conda] cuda100                   1.0                           0    pytorch
[conda] cuda90                    1.0                  h6433d27_0    pytorch
[conda] cuda92                    1.0                           0    pytorch
[conda] magma-cuda90              2.2.0                hae16b58_1    soumith
[conda] mkl                       2019.1                      144  
[conda] mkl-service               1.1.2            py36he904b0f_5  
[conda] mkl_fft                   1.0.10           py36ha843d7b_0  
[conda] mkl_random                1.0.2            py36hd81dba3_0  
[conda] pytorch                   1.0.0           py3.6_cuda10.0.130_cudnn7.4.1_1  [cuda100]  pytorch
[conda] pytorch-nightly           1.0.0.dev20190404 py3.6_cuda10.0.130_cudnn7.4.2_0    pytorch

TypeError: unrecognized operator 'Pad'

Hi, I got a problem while loading a model using onnx.js. Is there any way to solve this error?
Also, I got these two messages at the same time:
w WebGLBackend 2019-04-19T09:24:13.803Z|Unable to initialize WebGLBackend. ReferenceError: document is not defined
e WebAssembly-Workers 2019-04-19T09:24:13.808Z|Environment does not support usage of Workers. Will not spawn workers.

I ran two lines of code in Node v8.10.0:
const session = new onnx.InferenceSession();
session.loadModel('model.onnx')

the onnx file is here.

BTW, I can load it using onnxruntime.

Source data too small. Allocating larger array instrument.ts:82

I created a model for image classification using PyTorch and converted it to ONNX.
When I try a random-valued input of shape [1, 3, 224, 224] on the 'webgl' backend, the output seems almost the same regardless of the input values.
The console says "Source data too small. Allocating larger array" (instrument.ts:82). I'm not sure how and where to fix this. I think there is something wrong with the model.
My model is at https://github.com/fumiya0/patimg-classification-test/TCGA3.onnx
and you can click "predict" at https://fumiya0.github.io//patimg-classification-test/ to see what happens when the model deals with an input. Sorry for the messy code!
When I try the model res50_8.onnx, which I downloaded from the onnx demo repository, there is no warning message and it seems to work well.
Thanks!

BatchNorm when coming from pytorch does not work.

Simple Pytorch script to showcase the failure

#!/usr/bin/env python
import torch
from torch import nn


def main():
    model = nn.Sequential(nn.Conv2d(3, 8, kernel_size=3), nn.BatchNorm2d(8))
    model.eval()

    image = torch.rand(1, 3, 224, 224)
    torch.onnx.export(model, image, "out.onnx", verbose=True)


if __name__ == "__main__":
    main()

When I try to load it with the default script I get
Scaler tensor is not implemented yet.
Backtracking the issue, I think it is linked to PyTorch's BatchNorm.num_batches_tracked tensor, which is a scalar.

By the way, why are scalars not supported yet? Don't they behave as tensors of shape=[1]? (broadcasting issues aside)

Invalid inputs detected for unnamed_Conv_0

I'm trying to use an audio CNN from PyTorch in onnxjs, but I think I'm having some problems with the Conv1d layers. I got no errors with onnx.checker.check_model(model) in Python, and it seems to load fine into an InferenceSession with await session.loadModel(...) in Node. However, when I provide input of the same size and type as was used for the trace, I get this error:

Error: invalid inputs detected; op: unnamed_Conv_0 at ExecutionPlan.<anonymous> (C:\Users\...\my_onnx\node_modules\onnxjs\lib\execution-plan.js:166:63)

The first conv layer in the model corresponds to this layer in ONNX:

%13 : Float(1, 32, 11024) = onnx::Conv[dilations=[1], group=1, kernel_shape=[7], pads=[0, 0], strides=[4]](%input, %layer_0, %layer_1), scope: ConvEmbedding2/Conv1d[conv1]

When I remove the conv layers and just use nn.Linear and ReLU activations, I have no problems, which suggests that I actually constructed the input tensor correctly and the conv layers themselves are problematic.

[linux] npm ci fails on ubuntu 16.04.5, node.js v10.13.0

npm ci fails against commit 6aa8898

Dev environment is ubuntu 16.04.5, node.js v10.13.0

The error messages are:

> [email protected] prepare /home/nhu/code/onnxjs
> tsc && node tools/build

test/test-runner.ts:144:47 - error TS2345: Argument of type 'NamedTensor[]' is not assignable to parameter of type 'Tensor[] | Map<string, Tensor>'.
  Type 'NamedTensor[]' is not assignable to type 'Tensor[]'.
    Type 'NamedTensor' is not assignable to type 'Tensor'.
      Property 'data' is missing in type 'NamedTensor'.

144     const outputs = await context.session.run(testCase.inputs!);
                                                  ~~~~~~~~~~~~~~~~

test/test-runner.ts:150:57 - error TS2339: Property 'type' does not exist on type 'NamedTensor'.

150       Logger.verbose('TestRunner', `   '${i.name}': ${i.type}[${i.dims.join(',')}]`);
                                                            ~~~~

test/test-runner.ts:150:67 - error TS2339: Property 'dims' does not exist on type 'NamedTensor'.

150       Logger.verbose('TestRunner', `   '${i.name}': ${i.type}[${i.dims.join(',')}]`);
                                                                      ~~~~

test/test-runner.ts:153:22 - error TS7006: Parameter 't' implicitly has an 'any' type.

153     outputs.forEach((t, name) => {
                         ~

test/test-runner.ts:153:25 - error TS7006: Parameter 'name' implicitly has an 'any' type.

153     outputs.forEach((t, name) => {
                            ~~~~

test/test-runner.ts:284:68 - error TS2345: Argument of type 'NamedTensor[]' is not assignable to parameter of type 'Tensor[]'.

284     this.checkTensorResult(expected.map(i => actual.get(i.name)!), expected);
                                                                       ~~~~~~~~

test/test-types.ts:6:22 - error TS2307: Cannot find module '../lib/Tensor'.

6 import {Tensor} from '../lib/Tensor';
                       ~~~~~~~~~~~~~~~

Unnatural node indexes

https://github.com/Microsoft/onnxjs/blob/c2f3de6d934e807ec7880ea3e9438ce519d1b8f6/lib/graph.ts#L221-L233

The lines above add a data buffer for the output nodes, which is unnecessary because the lines at
https://github.com/Microsoft/onnxjs/blob/c2f3de6d934e807ec7880ea3e9438ce519d1b8f6/lib/graph.ts#L235-L271
already take care of this task for all nodes in the graph.

The current implementation causes confusion because the nodes have different indexes compared to the indexes in the ONNX graph. My suggestion is to move lines L221-L233 after the scan of all nodes so that the indexes are natural.

TypeError: unrecognized operator 'Elu'

When trying to load up my model I get

TypeError: unrecognized operator 'Elu'

I exported my model via PyTorch. Does this mean I can't use Elu with onnxjs?

I use the tutorial code for loading

const { Tensor, InferenceSession } = require('onnxjs')

const main = async () => {
  const session = new InferenceSession()

  const url = './model-1.onnx'
  await session.loadModel(url)
}

main()

Best,

Error loading onnxjs from webpack

I've been trying to add onnxjs support in VoTT, but I've been hitting some issues. I have a model that I was able to successfully run in node.js, but when I try to run it on the frontend, I get

TypeError: cannot resolve operator 'Shape' with opsets: ai.onnx v9

I tried converting the model to use a lower opset, but I faced similar errors with the converted models.

So I thought that I'd be clever and try a workaround and run the model on the backend using onnxjs-node, sending the results back to the frontend using IPC.

After adding a little code into src/electron/main.ts

import { Tensor, InferenceSession } from "onnxjs-node";
const inferenceSession = new InferenceSession();

and updating my webpack config, I'm seeing a couple of new errors.

 σ csaroff:~/fiddle/VoTT$ node_modules/.bin/webpack-cli --config config/webpack.dev.js
...
WARNING in ./node_modules/onnxjs-node/bin/napi-v3 sync ^\.\/.*\.node$
Module not found: Error: Can't resolve 'node-loader' in '/Users/csaroff/fiddle/VoTT'
 @ ./node_modules/onnxjs-node/bin/napi-v3 sync ^\.\/.*\.node$
 @ ./node_modules/onnxjs-node/lib/binding.js
 @ ./node_modules/onnxjs-node/lib/inference-session-override.js
 @ ./node_modules/onnxjs-node/lib/index.js
 @ ./src/electron/main.ts

ERROR in ./node_modules/onnxjs/lib/wasm-binding.js
Module not found: Error: Can't resolve 'worker-loader' in '/Users/csaroff/fiddle/VoTT/node_modules/onnxjs/lib'
 @ ./node_modules/onnxjs/lib/wasm-binding.js 101:29-96
 @ ./node_modules/onnxjs/lib/backends/backend-wasm.js
 @ ./node_modules/onnxjs/lib/api/onnx-impl.js
 @ ./node_modules/onnxjs/lib/api/index.js
 @ ./node_modules/onnxjs-node/lib/index.js
 @ ./src/electron/main.ts

So then I ran npm install --save-dev node-loader worker-loader and tried rerunning webpack.

σ csaroff:~/fiddle/VoTT$node_modules/.bin/webpack-cli --config config/webpack.dev.js
...
ERROR in ./node_modules/onnxjs/lib/wasm-binding.js
Module not found: Error: Can't resolve './worker/worker-main' in '/Users/csaroff/fiddle/VoTT/node_modules/onnxjs/lib'
 @ ./node_modules/onnxjs/lib/wasm-binding.js 101:29-96
 @ ./node_modules/onnxjs/lib/backends/backend-wasm.js
 @ ./node_modules/onnxjs/lib/api/onnx-impl.js
 @ ./node_modules/onnxjs/lib/api/index.js
 @ ./node_modules/onnxjs-node/lib/index.js
 @ ./src/electron/main.ts

And this is where I'm stuck. I see that you chose not to include lib/worker in your tsconfig file, but I couldn't figure out why. So I tried rebuilding the project, removing the "exclude": ["lib/worker"], line from the tsconfig.json, but that just gives me a whole mess of typescript errors.

σ csaroff:~/fiddle/onnxjs$npm run build

> [email protected] build /Users/csaroff/fiddle/onnxjs
> tsc && node tools/build --build-wasm --build-bundle

node_modules/typescript/lib/lib.dom.d.ts:25:1 - error TS6200: Definitions of the following identifiers conflict with those in another file: EventListenerOrEventListenerObject, ImportExportKind, TableKind, BlobPart, HeadersInit, BodyInit, RequestInfo, DOMHighResTimeStamp, CanvasImageSource, OffscreenRenderingContext, MessageEventSource, ImageBitmapSource, TimerHandler, PerformanceEntryList, VibratePattern, AlgorithmIdentifier, HashAlgorithmIdentifier, BigInteger, NamedCurve, GLenum, GLboolean, GLbitfield, GLint, GLsizei, GLintptr, GLsizeiptr, GLuint, GLfloat, GLclampf, TexImageSource, Float32List, Int32List, BufferSource, DOMTimeStamp, FormDataEntryValue, IDBValidKey, Transferable, BinaryType, CanvasDirection, CanvasFillRule, CanvasLineCap, CanvasLineJoin, CanvasTextAlign, CanvasTextBaseline, ClientTypes, EndingType, IDBCursorDirection, IDBRequestReadyState, IDBTransactionMode, ImageSmoothingQuality, KeyFormat, KeyType, KeyUsage, NotificationDirection, NotificationPermission, OffscreenRenderingContextId, PermissionName, PermissionState, PushEncryptionKeyName, PushPermissionState, ReferrerPolicy, RequestCache, RequestCredentials, RequestDestination, RequestMode, RequestRedirect, ResponseType, ServiceWorkerState, ServiceWorkerUpdateViaCache, VisibilityState, WebGLPowerPreference, WorkerType, XMLHttpRequestResponseType

25 interface Account {
   ~~~~~~~~~

  node_modules/typescript/lib/lib.webworker.d.ts:25:1
    25 interface AddEventListenerOptions extends EventListenerOptions {
       ~~~~~~~~~
    Conflicts are in this file.
...

I'm pretty much out of ideas. I checked in a minimally reproducible example on this branch. Here's a diff of the changes. Any help/insight here would be great!

BatchNormalization: non-spatial is not supported

Hi,

I'm new to ML/JS so forgive me if I have completely missed something. I was faced with the error "invalid inputs detected; op: fc1" when trying to warm up the ArcFace/resnet100 model. Upon investigation with Netron, lutzroeder's ML model visualizer, I found that fc1 is the final output. I'm not too sure what I have done wrong; please point me in the right direction? :]

Note: runModel is from onnxjs-demo

import React from 'react';
import * as runModeUtils from './utils/runModel';
import { InferenceSession, Tensor } from 'onnxjs';

const MODEL_FILEPATH = './resnet100.onnx';
const IMAGE_SIZE = 112;

export default class Face extends React.Component {
  constructor( props ) {
    super( props );

    let sess = new InferenceSession({ backendHint: 'webgl' });
    this.state = {
      sess: sess
    };
  }

  componentDidMount( ) {
    this.warmUp( );
  }

  warmUp = async () => {
    await this.state.sess.loadModel( MODEL_FILEPATH );
    let res = await runModeUtils.warmupModel( this.state.sess, [1, 3, IMAGE_SIZE, IMAGE_SIZE] );
    console.log( res );
  }

  render() {
    return <><p>Hello</p></>;
  }
}

feature request: TransposeConv operator

Just to register interest in the addition of the TransposeConv operator. This is a widely used operator for upsampling tasks like pose estimation, convolutional auto-encoders, etc.

Thanks

Using WASM SIMD, multithreading

Great to see this!

Are there any plans to use WASM post-MVP features like SIMD and multi-threading, instead of Web Workers, for CPU-based parallelism?

Error on WASM backend enlargeMemory

When I try to run a model on WASM that runs fine on CPU, I get:

LinkError: "import object field 'enlargeMemory' is not a Function"

When I use gpu (where not all ops are supported) I see that the build enlarges memory progressively until it can fit my model (I guess).

I tried using -s ASSERTIONS=1 in the EMCC build line, but the error remains cryptic to me. Is it that we did not export enlargeMemory from emcc? Or has enlargeMemory been renamed upstream (emscripten-core/emscripten#8230)?

I'll continue investigating, but I'm dropping an issue here as someone might have hints about what is happening.

Uncaught (in promise) Error: multiple nodes output to one data value

When I load a PyTorch-exported ONNX model, something goes wrong as shown below. What's wrong with my operations? I have checked the exported ONNX model with onnx.checker. Thanks!
graph.ts:298 Uncaught (in promise) Error: multiple nodes output to one data value: 19
at t.buildGraph (graph.ts:298)
at new t (graph.ts:139)
at Object.from (graph.ts:77)
at t.load (model.ts:29)
at session.ts:85
at t.event (instrument.ts:294)
at e.initialize (session.ts:81)
at e. (session.ts:63)
at onnx.ts:87
at Object.next (onnx.ts:87)

iOS support

I know you are aware of this issue but you can't realistically use this library in production until this is solved.

Awesome library, by the way :)

yolo demo on mobile

The YOLO demo doesn't work on mobile.
Device: Samsung Note 8 with the latest Chrome.

Error: invalid wire type 4 at offset 3

I am trying to develop a React app that does object detection using ONNX. I am using the squeezenet example from the onnx-demo project. On running the model, the console says:

Error: invalid wire type 4 at offset 3
    at u.skipType (onnx.min.js:5941)
    at Function.t.decode (onnx.min.js:2669)
    at Function.t.decode (onnx.min.js:2444)
    at t.load (onnx.min.js:17092)
    at onnx.min.js:16164
    at t.event (onnx.min.js:1200)
    at e.initialize (onnx.min.js:16162)
    at e.<anonymous> (onnx.min.js:16139)
    at onnx.min.js:16071
    at Object.next (onnx.min.js:16084)
    at a (onnx.min.js:15972)

Project code

Uncaught (in promise) TypeError: unrecognized operator 'Squeeze'

After exporting a model from PyTorch to ONNX, I attempted to load the model using the code in the README and encountered this error:

Uncaught (in promise) TypeError: unrecognized operator 'Squeeze'
    at t.createOperator (session-handler.ts:180)
    at t.resolve (session-handler.ts:72)
    at e.initializeOps (session.ts:252)
    at session.ts:92
    at t.event (instrument.ts:294)
    at e.initialize (session.ts:81)
    at e.<anonymous> (session.ts:63)
    at inference-session-impl.ts:16
    at Object.next (inference-session-impl.ts:16)
    at a (inference-session-impl.ts:16)

I looked at the list of supported operators, and it looks like Squeeze isn't listed on it. Are there plans to support it?

Error loading model: unrecognized input '' for node: unnamed_LSTM_0

I exported a PyTorch model to ONNX, but I cannot load it in the browser for inference.
Details about my problem:

ONNX version: 1.5.0
Python version: Python 3.6.7 :: Anaconda, Inc.
Browser: Google Chrome 75.0.3770.100
ONNX model: https://drive.google.com/file/d/16JIuHKmp9vA-CL_6z6l8KjwZ1Qcnf-Jz/view?usp=sharing

Code:

<html>
  <head> </head>

  <body>
    <!-- Load ONNX.js -->
    <script src="https://cdn.jsdelivr.net/npm/onnxjs/dist/onnx.min.js"></script>
    <!-- Code that consume ONNX.js -->
    <script>
       // Create an ONNX inference session with default backend.
       const session = new onnx.InferenceSession();
            
        // Load an ONNX model.
        session.loadModel("./NumberPredictor.onnx").then((success) => {
            console.log("success", success);
        }).catch((error) => {
            console.log("error", error);
        });
    </script>
  </body>
</html>

Error message: Error: unrecognized input '' for node: unnamed_LSTM_0

Add support for dynamic input sizes

I have an FCN which can theoretically take any size input. However, an error is returned if I use any input size other than that defined in the onnx file. It would be very helpful if dynamic input sizes were to be supported.

onnx.js inside webworker

We are running onnx.js in a web worker inside Chrome, but we want to utilize the GPU as well; is there any way to achieve this?
Currently onnx.js just stops working after reporting that window is not defined.

We are loading onnx.js via importScripts('./onnx.js'), by the way...

CDN file very far behind master

When importing onnxjs via <script src="https://cdn.jsdelivr.net/npm/onnxjs/dist/onnx.min.js"></script>, the version of onnxjs obtained is very old and is missing a couple of essential fixes. Is there any way to get a newer version of onnxjs using <script>?
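
One possible workaround (not an official recommendation) is to pin an explicit package version in the jsDelivr URL, in the same way the pinned URLs in some other issues on this page do, e.g. https://cdn.jsdelivr.net/npm/onnxjs@<version>/dist/onnx.min.js, replacing <version> with the release you need.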

InstanceNormalization op works with cpu backend, but fails with wasm backend.

Hi @hariharans29, I just tried out the newly added InstanceNormalization op, as in #18. With the cpu backend, it works well.

But when using the wasm backend, it gives the following RuntimeError when running with the Chrome browser (72.0.3626.119) on Windows 10:

Uncaught (in promise) RuntimeError: memory access out of bounds
    at wasm-function[49]:376
    at t._instance_normalization_f32 (https://cdn.jsdelivr.net/npm/[email protected]/dist/onnx.min.js:21:64093)
    at e.t.func (https://cdn.jsdelivr.net/npm/[email protected]/dist/onnx.min.js:21:48678)
    at e.t.ccall (https://cdn.jsdelivr.net/npm/[email protected]/dist/onnx.min.js:21:48067)
    at e.run (https://cdn.jsdelivr.net/npm/[email protected]/dist/onnx.min.js:21:81058)
    at t.<anonymous> (https://cdn.jsdelivr.net/npm/[email protected]/dist/onnx.min.js:21:223361)
    at https://cdn.jsdelivr.net/npm/[email protected]/dist/onnx.min.js:21:220510
    at Object.next (https://cdn.jsdelivr.net/npm/[email protected]/dist/onnx.min.js:21:220615)
    at https://cdn.jsdelivr.net/npm/[email protected]/dist/onnx.min.js:21:219528
    at new Promise (<anonymous>)
wasm-function[49]	@	wasm-0000006e-49:205
t._instance_normalization_f32	@	onnx-wasm.js:8
t.func	@	wasm-binding-core.ts:154
t.ccall	@	wasm-binding-core.ts:116

Here is the code:

<script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/onnx.min.js"></script>
<script>
    const sess = new onnx.InferenceSession({backendHint: 'wasm'});

    sess.loadModel("./onnx_models/candy_128x128_wasm.onnx").then(()=>{

    const x = new Float32Array(1*3*128*128).fill(1);
    const inputT = new onnx.Tensor(x, 'float32', [1,3,128,128]);

    sess.run([inputT]).then(output=>{
        const outputT = output.values().next().value;

        //console.log(`model output tensor: ${outputT.data}.`);
        console.log(`model output tensor size: ${outputT.size}`);
        });
    });
</script>

I've set up a GitHub page to repro this issue. Both onnx-wasm.wasm and onnx-worker.js are in the same directory as insnorm_test.html, as instructed.
https://gnsmrky.github.io/pytorch-fast-neural-style-onnxjs/insnorm_test.html

Please let me know what to do to help out.

TypeError: unrecognized operator 'Upsample'

When using a style transfer model compiled to onnx, I get the following Error in onnxjs 0.1.5:

core.js:15724 ERROR Error: Uncaught (in promise): TypeError: unrecognized operator 'Upsample'
TypeError: unrecognized operator 'Upsample'

I am using the pytorch function

x = F.interpolate(x, mode='nearest', scale_factor=self.scale_factor)

which is translated to the ONNX Upsample operator; this operator is not supported in onnxjs yet.

Uncaught (in promise) TypeError: cannot resolve operator 'BatchNormalization' with opsets: ai.onnx v6

Uncaught (in promise) TypeError: cannot resolve operator 'BatchNormalization' with opsets: ai.onnx v6

at Object.e.resolveOperator (onnx.min.js:3744)
at t.resolve (onnx.min.js:6679)
at e.initializeOps (onnx.min.js:16130)
at onnx.min.js:16046
at t.event (onnx.min.js:1283)
at e.initialize (onnx.min.js:16044)
at e.<anonymous> (onnx.min.js:16021)
at onnx.min.js:15953
at Object.next (onnx.min.js:15966)
at a (onnx.min.js:15854)

I suspect that this has to do with BatchNormalization only being supported in opsets v7+ (the error indicates that the package is utilizing opset v6). However, per documentation it seems that version 0.1.7 of this package utilizes opset v7. I'm using a webgl backend in browser (Chrome).

compact model exported as onnx - trained on customvision.ai doesn't work

Hi,

I trained a model on customvision.ai with compact settings. I think it is a squeezenet model.
I exported the model in ONNX format (1.0 and 1.2 too) and tried it with the https://github.com/Microsoft/onnxjs/tree/master/examples/browser/squeezenet example.

It is not working; it is not able to load the model.

await session.loadModel("./my.onnx");

Error in Windows 10 / Chrome 70.0.3538.110:

onnx.min.js:1 Uncaught (in promise) TypeError: Cannot read property 'elemType' of null
at Function.t.tensorValueTypeFromProto (onnx.min.js:1)
at new t (onnx.min.js:8)
at t.buildGraph (onnx.min.js:8)
at new t (onnx.min.js:8)
at Object.from (onnx.min.js:8)
at t.load (onnx.min.js:8)
at onnx.min.js:8
at t.event (onnx.min.js:1)
at e.initialize (onnx.min.js:8)
at e. (onnx.min.js:8)

How can I load a model that was trained on customvision.ai?

webgl extension

With the latest release I'm getting this error with Chrome on Android (Samsung Note 9, Samsung S6):

[.WebGL-0xcc479c00]GL ERROR :GL_INVALID_OPERATION : glDrawArrays: GL_BLEND with floating-point color attachments requires the EXT_float_blend extension

[Discussion] WebNN backend

The WebNN (Web Neural Network) API is a new proposal that allows access to hardware acceleration (CPU/GPU/AI accelerators) for neural networks. WebNN is currently being incubated within the W3C Machine Learning for the Web Community Group (WebML CG).

It is expected that JavaScript ML frameworks will be able to leverage the WebNN API to offload computation to native code (refer to the framework-level use cases). So, as an investigation, @pinzhenx and I prototyped a WebNN backend for ONNX.js. Our prototype introduces WebNNBackend, WebNNSessionHandler, WebNNInferenceHandler, WebNNGraphNode and WebNNGraphOp. When loading an ONNX model, WebNNSessionHandler.transformGraph tries to partition a sub-graph that can be offloaded to WebNN and rewrites the sub-graph with a custom WebNNGraphOp. When running the model, the WebNNGraphOp first builds an equivalent WebNN graph based on the nodes of the sub-graph, and then compiles and executes the WebNN graph natively.

When testing on the Chromium WebNN prototype, we observed a good model inference speedup on both CPU and GPU compared to the existing WASM and WebGL backends. The following chart shows the performance results collected for the different backends (WASM, WebGL, WebNN-CPU, WebNN-GPU) by the modified ONNX.js Squeezenet and Resnet50 examples.

(Chart: inference performance of the WASM, WebGL, WebNN-CPU and WebNN-GPU backends on the Squeezenet and Resnet50 examples)

NOTE: running the examples in vanilla browsers would just use the WebNN polyfill, so you may not observe the performance boost.

The specs of the machine that was used to collect the data are:

  • OS: Ubuntu 16.04.5 LTS
  • Browser: Chromium WebNN prototype (75.0.3739.0), WebNN-CPU is based on MKL-DNN, WebNN-GPU is based on clDNN.
  • CPU: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz, 6 Cores, 12 Logical Processor(s)
  • GPU: Intel (R) UHD Graphics 630
  • Memory: 32GB

WebNN is still in its early stage. However, we'd like to share the initial results and explore WebNN API design for JS ML frameworks optimization with the ONNX.js community.

Thoughts?

Conv1d not supported

It seems like 1-dimensional convolutions don't work in onnx.js. Given the following network

class Net(nn.Module):
  def __init__(self):
    super().__init__()
    self.conv = nn.Conv1d(1,16,5,padding=2, stride=2)
    self.conv2 = nn.Conv1d(16, 1, 5, padding=2, stride=2)
  
  def forward(self, x):
    x = torch.relu(self.conv(x))
    x = torch.relu(self.conv2(x))
    return x

exported with the following command

torch.onnx.export(n, x, "test_net.onnx", verbose=True, input_names=["audio_in"],
                  output_names=["audio_out"])

when trying to run an inference session with

const sess = new onnx.InferenceSession({backendHint: "cpu"});
await sess.loadModel("test_net.onnx");

let list_input = [];
let i;
for (i=0; i<1024; i++){
    list_input[i] = 0;
};

let tensor_input = new onnx.Tensor(list_input, "float32", [1,1,1024]);
                
const outputMap = await sess.run([tensor_input]);
                
return outputMap.values().next().value.data;

the following error is thrown:

execution-plan.ts:101 Uncaught (in promise) Error: invalid inputs detected; op: unnamed_Conv_0. This error has already been reported in issue #103, but only has a workaround (replacing Conv1d with Conv2d). Is there any plan to support Conv1d in onnx.js?

DepthWise conv layer incorrect (Conv.groups)?

There seems to be a bug when using Convolutional Depthwise layers.

Here is a small script to generate a faulty onnx file:

import torch


def main():
    model = torch.nn.Sequential(
        torch.nn.Conv2d(
            in_channels=3, out_channels=8, kernel_size=3, stride=2, padding=1
        ),
        torch.nn.Conv2d(
            in_channels=8, out_channels=64, kernel_size=3, stride=2, groups=8, padding=1
        ),
    )

    model.eval()

    image = torch.rand(1, 3, 120, 120)

    input_names = ["input"] + ["layer_%s" % i for i in range(4)]
    # These must be in the same order as they were appended to the model
    output_names = ["face_model"]

    torch.onnx.export(
        model,
        image,
        "test.onnx",
        verbose=True,
        input_names=input_names,
        output_names=output_names,
    )


if __name__ == "__main__":
    main()

When you try to load the model everything works fine, but an error is raised at inference time:
invalid inputs detected; op: unnamed_Conv_1
This is the depthwise layer that is faulty.

I pinpointed the problem to lib/ops/conv.ts:37, where I believe
const filterInChannel = inputs[1].dims[1] / this.group;
should actually be
const filterInChannel = inputs[1].dims[1] * this.group;

Having tried the fix I encounter WebGL errors:

WebGL: INVALID_OPERATION: texImage2D: ArrayBufferView not big enough for request
(index):1 [.WebGL-0x7fb325036a00]RENDER WARNING: texture bound to texture unit 1 is not renderable. It maybe non-power-of-2 and have incompatible texture filtering.

I'm going to dive a little more to understand this problem.

only support ONNX model with IR_VERSION=3

I converted a Keras model using onnxmltools.convert_keras with target_opset=7, but the IR_VERSION is set to 4.
Is there a way to have compatibility between the export and your lib?

Best regards,
