
wonnx's Introduction

WONNX


Wonnx is a GPU-accelerated ONNX inference run-time written 100% in Rust, ready for the web.

Supported Platforms (enabled by wgpu)

API    | Windows       | Linux & Android | macOS & iOS
Vulkan | ✅            | ✅              |
Metal  |               |                 | ✅
DX12   | ✅ (W10 only) |                 |
DX11   | 🚧            |                 |
GLES3  |               | 🆗              |

✅ = First Class Support — 🆗 = Best Effort Support — 🚧 = Unsupported, but support in progress

Getting started

From the command line

Ensure your system supports either Vulkan, Metal or DX12 for access to the GPU. Then either download a binary release, or install Rust and run cargo install --git https://github.com/webonnx/wonnx.git wonnx-cli to install the CLI.

The CLI tool (nnx) provides a convenient interface for tinkering with models (see the README for more information):

nnx info ./data/models/opt-squeeze.onnx
nnx infer ./data/models/opt-squeeze.onnx -i data=./data/images/pelican.jpeg --labels ./data/models/squeeze-labels.txt --top 3

From Rust

Add the wonnx crate as a dependency (cargo add wonnx if you have cargo-add). Then see the examples for usage, or browse the API docs.
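A minimal sketch of what this can look like, mirroring the test snippet shown later in this README (the exact signature of Session::run and the input/output tensor types may differ between wonnx versions, so treat this as illustrative rather than authoritative):

use std::collections::HashMap;
use std::path::Path;

fn main() {
    // single_relu.onnx ships with the repository's test data.
    let session = pollster::block_on(wonnx::Session::from_path(Path::new(
        "data/models/single_relu.onnx",
    )))
    .expect("session did not create");

    // Map each model input name to its data; single_relu has one input named "x".
    let x: &[f32] = &[-1.0, 2.0];
    let mut inputs = HashMap::new();
    inputs.insert("x".to_string(), x);

    // The result maps output names ("y" for this model) to the computed values.
    let result = pollster::block_on(session.run(inputs)).expect("inference failed");
    println!("{:?}", result.get("y"));
}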

From Python

pip install wonnx

And then, to use:

from wonnx import Session
session = Session.from_path(
    "../data/models/single_relu.onnx"
)
inputs = {"x": [-1.0, 2.0]}
assert session.run(inputs) == {"y": [0.0, 2.0]}

Then run python3 with the above Python code!

For more details on the Python package including build instructions, see wonnx-py.

In the browser, using WebGPU + WebAssembly

npm install @webonnx/wonnx-wasm

And then, on the client side:

import init, { Session, Input } from "@webonnx/wonnx-wasm";

// Check for WebGPU availability first: if(navigator.gpu) { .. }
await init();
const session = await Session.fromBytes(modelBytes /* Uint8Array containing the ONNX file */);
const input = new Input();
input.insert("x", [13.0, -37.0]);
const result = await session.run(input); // This will be an object where the keys are the names of the model outputs and the values are arrays of numbers.
session.free();
input.free();

The package @webonnx/wonnx-wasm provides an interface to WONNX, which is included as a WebAssembly module and will use the browser's WebGPU implementation. See wonnx-wasm-example for a more complete usage example involving a bundler.

For more details on the JS/WASM package including build instructions, see wonnx-wasm.

For development

To work on wonnx itself, follow these steps:

  • Install Rust
  • Install Vulkan, Metal, or DX12 for the GPU API.
  • git clone this repo.
git clone https://github.com/webonnx/wonnx.git

Then, you're all set! You can run one of the included examples through cargo:

cargo run --example squeeze --release

Running other models

  • To run an ONNX model, first simplify it with nnx prepare (use cargo run -- prepare instead when inside this repo):
nnx prepare -i ./some-model.onnx ./some-model-prepared.onnx

To specify dynamic dimension parameters, add e.g. --set batch_size=1.
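For example, combining the prepare invocation above with a dynamic dimension parameter:

nnx prepare -i ./some-model.onnx ./some-model-prepared.onnx --set batch_size=1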

You can also use an external tool, such as onnx-simplifier, with the command:

# pip install -U pip && pip install onnx-simplifier
python -m onnxsim mnist-8.onnx opt-mnist.onnx
cargo run --example mnist --release

Tested models

  • Squeezenet
  • MNIST
  • BERT

GPU selection

Except when running in WebAssembly, you may set the following environment variables to influence GPU selection by WGPU:

  • WGPU_ADAPTER_NAME with a substring of the name of the adapter you want to use (e.g. 1080 will match NVIDIA GeForce 1080ti).
  • WGPU_BACKEND with a comma separated list of the backends you want to use (vulkan, metal, dx12, dx11, or gl).
  • WGPU_POWER_PREFERENCE with the power preference to choose when a specific adapter name isn't specified (high or low)
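For example, to prefer the Vulkan backend and an adapter whose name contains "1080" when running the CLI (an illustrative combination of the variables above with the infer command shown earlier):

WGPU_BACKEND=vulkan WGPU_ADAPTER_NAME=1080 nnx infer ./data/models/opt-squeeze.onnx -i data=./data/images/pelican.jpeg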

Contribution: On implementing a new Operator

Contributions are very welcome, even if you don't have extensive experience in DL, WGSL, or Rust. I hope that this project can be a sandbox for all of us to learn more about those technologies beyond this project's initial scope.

To implement an operator, all you have to do is:

  1. Add a new matching pattern in compiler.rs.
  2. Retrieve its attribute values using the get_attribute function:
    let alpha = get_attribute("alpha", Some(1.0), node);
    // or without a default value
    let alpha = get_attribute::<f32>("alpha", None, node);
  3. Add any variable you want to use in the WGSL shader to the template context.
  4. Write a new WGSL template in the templates folder.

Available types are in structs.wgsl but you can also generate new ones within your templates.

  5. Respect the binding layout: each binding index is incremented by 1 starting from 0, with inputs first and the output last. If the number of bindings exceeds 4, increment the binding group. You can change the inputs within sequencer.rs.
  6. Write the shader logic.

There are default variables available in the template context (a sketch of how they might be combined follows this list):

  • {{ i_lens[0] }}: the length (number of elements) of input 0. This also works for outputs ({{ o_lens[0] }}) and other inputs ({{ i_lens[1] }}).
  • {{ i_shape[0] }}: the array of dimensions of input 0. To get the first dimension, use {{ i_shape[0][0] }}.
  • {{ i_chunks[0] }}: the chunk sizes of the dimensions of input 0. By default, each variable is represented as a long, flat array of values, and to reach a specific element you have to move in chunks; those chunk sizes are held in this variable. To get the chunk size of the first dimension, use {{ i_chunks[0][0] }}.
  • {{ op_type }}: the op type; useful because some templates (e.g. activation) are shared by several op types.
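As a rough illustration of how these variables might be combined in a template (a sketch only, not taken from the repository: the Array struct, the binding attributes and the {{ alpha }} attribute are assumptions here, so consult structs.wgsl and the existing templates for the actual conventions), an element-wise shader could look like this:

{%- include "structs.wgsl" -%}

// Bindings follow the layout described above: inputs first, output last.
@group(0) @binding(0)
var<storage, read> input_0: Array;

@group(0) @binding(1)
var<storage, read_write> output_0: Array;

@compute @workgroup_size(256)
fn main(@builtin(global_invocation_id) global_id: vec3<u32>) {
    let gidx = global_id.x;
    // {{ i_lens[0] }} is rendered by Tera to the element count of input 0.
    if (gidx < {{ i_lens[0] }}u) {
        // {{ alpha }} would be an attribute value added to the context in compiler.rs.
        output_0.data[gidx] = input_0.data[gidx] * {{ alpha }};
    }
}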
  7. Test it using the utility functions and place the test in the tests folder. The test can look as follows:
#[test]
fn test_matmul_square_matrix() {
    // USER INPUT

    let n = 16;
    let mut input_data = HashMap::new();

    let data_a = ndarray::Array2::eye(n);
    let mut data_b = ndarray::Array2::<f32>::zeros((n, n));
    data_b[[0, 0]] = 0.2;
    data_b[[0, 1]] = 0.5;

    let sum = data_a.dot(&data_b);

    input_data.insert("A".to_string(), data_a.as_slice().unwrap());
    input_data.insert("B".to_string(), data_b.as_slice().unwrap());

    let n = n as i64;
    let model = model(graph(
        vec![tensor("A", &[n, n]), tensor("B", &[n, n])],
        vec![tensor("C", &[n, n])],
        vec![],
        vec![],
        vec![node(vec!["A", "B"], vec!["C"], "MatMul", "MatMul", vec![])],
    ));

    let session =
        pollster::block_on(wonnx::Session::from_model(model)).expect("Session did not create");

    let result = pollster::block_on(session.run(input_data)).unwrap();

    // Note: it is better to use a method that compares floats with a tolerance to account for differences
    // between implementations; see `wonnx/tests/common/mod.rs` for an example.
    assert_eq!((&result["C"]).try_into().unwrap(),sum.as_slice().unwrap());
}

Check out the Tera documentation for other templating operations: https://tera.netlify.app/docs/

  8. If at any point you want to optimise several nodes together, you can do so within sequencer.rs.

Supported Operators (ref ONNX IR)

Operator Since version Implemented Shape inference supported
Abs 13, 6, 1
Acos 7
Acosh 9
Add 14, 13, 7, 6, 1
And 7, 1
ArgMax 13, 12, 11, 1
ArgMin 13, 12, 11, 1
Asin 7
Asinh 9
Atan 7
Atanh 9
AveragePool 11, 10, 7, 1
BatchNormalization 15, 14, 9, 7, 6, 1
BitShift 11
Cast 13, 9, 6, 1
Ceil 13, 6, 1
Clip 13, 12, 11, 6, 1
Compress 11, 9
Concat 13, 11, 4, 1
ConcatFromSequence 11
Constant 13, 12, 11, 9, 1
ConstantOfShape 9
Conv 11, 1
ConvInteger 10
ConvTranspose 11, 1
Cos 7
Cosh 9
CumSum 14, 11
DepthToSpace 13, 11, 1
DequantizeLinear 13, 10
Det 11
Div 14, 13, 7, 6, 1
Dropout 13, 12, 10, 7, 6, 1
Einsum 12
Elu 6, 1
Equal 13, 11, 7, 1
Erf 13, 9
Exp 13, 6, 1
Expand 13, 8
EyeLike 9
Flatten 13, 11, 9, 1
Floor 13, 6, 1
GRU 14, 7, 3, 1
Gather 13, 11, 1 ✅ (axis=0)
GatherElements 13, 11
GatherND 13, 12, 11
Gemm 13, 11, 9, 7, 6, 1 ✅*
GlobalAveragePool 1
GlobalLpPool 2, 1
GlobalMaxPool 1
Greater 13, 9, 7, 1
GridSample 16
HardSigmoid 6, 1
Hardmax 13, 11, 1
Identity 16, 14, 13, 1
If 16, 13, 11, 1
InstanceNormalization 6, 1
IsInf 10
IsNaN 13, 9
LRN 13, 1
LSTM 14, 7, 1
LeakyRelu 6, 1
Less 13, 9, 7, 1
Log 13, 6, 1
Loop 16, 13, 11, 1
LpNormalization 1
LpPool 11, 2, 1
MatMul 13, 9, 1
MatMulInteger 10
Max 13, 12, 8, 6, 1
MaxPool 12, 11, 10, 8, 1
MaxRoiPool 1
MaxUnpool 11, 9
Mean 13, 8, 6, 1
Min 13, 12, 8, 6, 1
Mod 13, 10
Mul 14, 13, 7, 6, 1
Multinomial 7
Neg 13, 6, 1
NonMaxSuppression 11, 10
NonZero 13, 9
Not 1
OneHot 11, 9 ✅ (axis=-1)
Optional 15
OptionalGetElement 15
OptionalHasElement 15
Or 7, 1
PRelu 9, 7, 6, 1
Pad 13, 11, 2, 1 ✅ (mode=constant, pads>=0)
Pow 15, 13, 12, 7, 1 ✅ (broadcast=0 and data type is f32)
QLinearConv 10
QLinearMatMul 10
QuantizeLinear 13, 10
RNN 14, 7, 1
RandomNormal 1
RandomNormalLike 1
RandomUniform 1
RandomUniformLike 1
Reciprocal 13, 6, 1
ReduceL1 13, 11, 1
ReduceL2 13, 11, 1
ReduceLogSum 13, 11, 1
ReduceLogSumExp 13, 11, 1
ReduceMax 13, 12, 11, 1
ReduceMean 13, 11, 1
ReduceMin 13, 12, 11, 1
ReduceProd 13, 11, 1
ReduceSum 13, 11, 1
ReduceSumSquare 13, 11, 1
Relu 14, 13, 6, 1
Reshape 14, 13, 5, 1
Resize 13, 11, 10
ReverseSequence 10
RoiAlign 16, 10
Round 11
Scan 11, 9, 8
Scatter (deprecated) 11, 9
ScatterElements 16, 13, 11
ScatterND 16, 13, 11
Selu 6, 1
SequenceAt 11
SequenceConstruct 11
SequenceEmpty 11
SequenceErase 11
SequenceInsert 11
SequenceLength 11
Shape 15, 13, 1
Shrink 9
Sigmoid 13, 6, 1
Sign 13, 9
Sin 7
Sinh 9
Size 13, 1
Slice 13, 11, 10, 1
Softplus 1
Softsign 1
SpaceToDepth 13, 1
Split 13, 11, 2, 1
SplitToSequence 11
Sqrt 13, 6, 1
Squeeze 13, 11, 1
StringNormalizer 10
Sub 14, 13, 7, 6, 1
Sum 13, 8, 6, 1
Tan 7
Tanh 13, 6, 1
TfIdfVectorizer 9
ThresholdedRelu 10
Tile 13, 6, 1
TopK 11, 10, 1
Transpose 13, 1
Trilu 14
Unique 11
Unsqueeze 13, 11, 1
Upsample (deprecated) 10, 9, 7
Where 16, 9
Xor 7, 1
Function Since version
Bernoulli 15
CastLike 15
Celu 12
DynamicQuantizeLinear 11
GreaterOrEqual 12
HardSwish 14
LessOrEqual 12
LogSoftmax 13, 11, 1
MeanVarianceNormalization 13, 9
NegativeLogLikelihoodLoss 13, 12
Range 11
Softmax 13, 11, 1
SoftmaxCrossEntropyLoss 13, 12

Known limitations

  • The Clip, Resize, Reshape, Split, Pad and ReduceSum ops accept (typically optional) secondary inputs to set various parameters (e.g. axis). These inputs are only supported if they are supplied as initializer tensors (i.e. they do not depend on inputs and are not outputs of other ops), because wonnx pre-compiles all operations to shaders in advance (and must know these parameters up front).

  • Internally 64-bit integers are not supported (the reason is they are not supported in the current version of WGSL); inputs and initializers with 64-bit scalars are converted to 32-bit values (possibly overflowing).

  • For MatMul and Gemm, the matrix dimensions must be divisible by 2, or the output matrix must be of size (1, N). Matrix multiplication only supports floats, not integers (this is a WebGPU/WGSL limitation).

Shape inference

WONNX needs to know the shape of input and output tensors for each operation in order to generate shader code for executing it. ONNX models however do not always contain this information for intermediate values. Shape inference is the process of deducing the shape of intermediate values from the shape of inputs and outputs and the characteristics of each operation.

WONNX supports a limited form of shape inference (the process of determining what the shapes are of the various nodes in a model's graph). Shape inference is available programmatically as well as through the CLI. Before shape inference can be performed, all dynamic dimension parameters need to be replaced with static values. Shape inference only infers output shapes from input shapes for specific supported ops (see the table above). Inference cannot succeed if the shape for any input of a node is not known. Nodes that already have fully defined shapes for their outputs are left unchanged (and the outputs are used for shape inference on nodes that use these outputs as inputs).

To perform shape inference using the CLI, run a command similar to this (here batch_size and sequence_length are dynamic dimension parameters; the -i flag enables shape inference):

nnx prepare model.onnx model-prepared.onnx --set batch_size=1 --set sequence_length=255 -i

To perform shape inference programmatically, use apply_dynamic_dimensions and infer_shapes from the wonnx_preprocessing::shape_inference module.

Constant folding

Some models contain subgraphs whose output can be determined statically, as they do not depend on the specific inputs provided during inference. WONNX can replace such constant intermediate values with static values ('constant folding'). This is supported in the following cases:

  • Output of nodes of the Constant op type (these are replaced with initializers)
  • Output of nodes of the Shape op type where the shape of the input is known (up front or during inference)
  • Output of nodes of which all inputs are constant (possibly after folding), and for which the operator is supported by WONNX.

Constant folding is performed as part of shape inference, unless disabled (from the CLI pass --no-fold-constants to disable). This is done in order to support models that dynamically calculate shapes using operators such as Shape/Squeeze/Unsqueeze depending on dynamically set dimension parameters (e.g. batch size).

License

Licensed under either of

Except for the following files:

  • data/models:

    • mobilenetv2-7.onnx: source, Apache-2.0 license only.
    • squeezenet-labels.txt: source, Apache-2.0 license only.
  • data/images:

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you shall be dual licensed as above, without any additional terms or conditions.

wonnx's People

Contributors

abioy, aliemjay, ariaghora, dependabot[bot], haixuantao, maekawatoshiki, mayjs, philpax, pixelspark, sludgephd, tiero, zimond


wonnx's Issues

Merge several variables for large bindings

Is your feature request related to a problem? Please describe.
The current implementation requires that each variable has its own binding, which slows down the computation.

Describe the solution you'd like
Make a hashmap that maps each variable to both the name of the concatenated variable and its offset.

Release a new version of the library, and the CLI tool to crates.io

Is your feature request related to a problem? Please describe.

To improve the 'tryability' of wonnx, users should be able to quickly do a cargo install wonnx-cli and run nnx infer .... This would then need to be added to the README as well.

Describe the solution you'd like

We should first release a new version of wonnx to crates.io after the CLI (#53) has merged.

Then we should release wonnx-cli as well (unfortunately we can't publish the workspace as a single package, and we don't want to merge the CLI into the wonnx package because it comes with all sorts of stuff that users who just want the library don't need).

An issue is that we need to fix links between packages (e.g. wonnx-cli refers to wonnx using the path ../wonnx, but for crates.io it should probably be a specific wonnx version or a link to the GitHub repository; see also rust-lang/cargo#6126).

Describe alternatives you've considered

We might want to consider providing binaries from the releases page on Github as well. If we have those, we can think about adding support for Homebrew.

Additional context

n/a

Can't run a single linear layer

Describe the bug
I try to export a single linear layer from PyTorch and get one of the following errors.
Error 1:
GpuError(CompileError { node: "Gemm_0", error: InvalidInputShape { input_index: 1, input_shape: Shape { dims: [10, 784], data_type: F32 } } })
Error 2:
IrError(OutputNodeNotFound("onnx::Add_4"))

I viewed the resulting onnx file at netron.app and it appeared to be correct.

To Reproduce

  1. Run the following script:
import torch

torch_model = torch.nn.Linear(784, 10)
model_input = torch.zeros((1, 784))  # This results in error 1. Changing the shape to (784,) results in error 2.
torch.onnx.export(torch_model,              # model being run
                  model_input,              # model input (or a tuple for multiple inputs)
                  "onnx/model.onnx",        # where to save the model (can be a file or file-like object)
                  export_params=True,       # store the trained parameter weights inside the model file
                  opset_version=11,         # the ONNX version to export the model to
                  do_constant_folding=True, # whether to execute constant folding for optimization
                  input_names=['input'],    # the model's input names
                  output_names=['output'])  # the model's output names
  2. Optionally run onnx-simplifier, but it doesn't do anything on such a simple model.
  3. Run the following Rust program:
use std::path::Path;

fn main() {
    #[cfg(not(target_arch = "wasm32"))]
    {
        pollster::block_on(run());
    }
}

async fn run() {
    let model_path = Path::new("onnx/model.onnx");
    let _session = wonnx::Session::from_path(model_path).await.unwrap();
}

Expected behavior
The model should load successfully.

Desktop
PopOS 20.04

Offering the option to use an existing wgpu device and queue in session initialisation

Is your feature request related to a problem? Please describe.
Currently, the API for creating wonnx Sessions requests the device and queue for you, and does not let you pass in your own. I'm looking at using wonnx as part of an existing wgpu context, and would like to reuse the resources I already have initialised.

Describe the solution you'd like
I'd like variants of the Session constructors, or a minor rearrangement of the API, so that users can pass in existing device and queue instances.

Describe alternatives you've considered
Trying to instantiate the session anyway. I'm not entirely sure what would happen if you request the device twice, and it may end up using the wrong device if the host application has explicitly chosen another device to run wgpu operations on.

Additional context
I am also not sure if this is a supported use-case to begin with (embedding wonnx into an existing wgpu application). Are there any potential issues with doing so?

graph info shape is empty

Describe the bug
Create a graph with an info node for any op; in WGSL, the i_shape[n] template variable is empty.

To reproduce

  • In conv.wgsl, add a line let _ = {{ i_shape[1][1] }}, which should evaluate to a dimension of the weight input.
  • Run the conv_without_pad test.
  • The test crashes.

LFS data quota exceeded

It looks like the LFS quota has been exceeded, so only the AWS-hosted data.zip can currently be used to get the data files:

$ git lfs fetch origin
fetch: Fetching reference refs/heads/wgpu-backend-default
batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access.                                                                                                                                            
error: failed to fetch some objects from 'https://github.com/webonnx/wonnx.git/info/lfs'

I see that opt-mnist.onnx and single_relu.onnx are only kilobytes/bytes, and opt-squeeze.onnx is only a few MB. Is LFS really necessary for so little data?

Culling identity ops does not always work properly

Describe the bug

In the sequencer, we recently added code that removes 'identity' operations (i.e. those that only change metadata of data, not the data itself, such as Reshape, Identity, etc.). The code does this by looking at the next op and replacing the input it receives from the identity op with the input the identity op receives itself: input_a -> A -> output_a -> B -> output_b becomes input_a -> B -> output_b by telling B to use input_a instead of output_a.

However, the next op we consider is not always the next one in the chain: a model such as A -> B, C -> D, B + D -> E can have order A C B D E. Assuming B is an identity op, when our code considers removing node B it should change node E (to point at the output of A) but it will instead look at node D.

Below is an example from BERT-Squad that shows this behaviour:

(screenshot omitted)

A solution could be to look at all nodes ahead to find the one that actually uses the identity op's output. However, I think this requires rethinking the sequencer's fundamental assumption that the node sequence is the right unit of analysis...

ONNX Backend Test does not pass for Softmax and Pow

It seems that the recently merged Pow and Softmax operators do not pass the ONNX backend tests in Python.

FAILED tests/test_onnx_backend.py::OnnxBackendNodeModelTest::test_pow_cpu - pyo3_runtime.PanicException: called `Result::unwrap()` on an...
FAILED tests/test_onnx_backend.py::OnnxBackendNodeModelTest::test_pow_example_cpu - pyo3_runtime.PanicException: called `Result::unwrap(...
FAILED tests/test_onnx_backend.py::OnnxBackendNodeModelTest::test_softmax_axis_0_cpu - pyo3_runtime.PanicException: called `Result::unwr...
FAILED tests/test_onnx_backend.py::OnnxBackendNodeModelTest::test_softmax_axis_1_cpu - AssertionError: 
FAILED tests/test_onnx_backend.py::OnnxBackendNodeModelTest::test_softmax_axis_2_cpu - pyo3_runtime.PanicException: called `Result::unwr...
FAILED tests/test_onnx_backend.py::OnnxBackendNodeModelTest::test_softmax_default_axis_cpu - pyo3_runtime.PanicException: called `Result...
FAILED tests/test_onnx_backend.py::OnnxBackendNodeModelTest::test_softmax_large_number_cpu - AssertionError: 
FAILED tests/test_onnx_backend.py::OnnxBackendNodeModelTest::test_softmax_negative_axis_cpu - pyo3_runtime.PanicException: called `Resul...

IrError(Type(ParametrizedDimensionUnsupported("batch")))

Describe the bug
Exporting a HuggingFace model using the recommended method results in the following error:
thread 'main' panicked at 'called 'Result::unwrap()' on an 'Err' value: IrError(Type(ParametrizedDimensionUnsupported("batch")))'
Including the batch dimension is not only what the HuggingFace tool does, but also what the official PyTorch docs recommend when exporting to ONNX.

To Reproduce

  1. pip install transformers[onnx]
  2. python -m transformers.onnx --model=bert-base-uncased --feature=default onnx/
  3. Run the following Rust program:
use std::path::Path;

fn main() {
    #[cfg(not(target_arch = "wasm32"))]
    {
        pollster::block_on(run());
    }
}

async fn run() {
    let model_path = Path::new("onnx/model.onnx");
    let _session = wonnx::Session::from_path(model_path).await.unwrap();
}

Expected behavior
The unwrap call should not encounter an error.

Desktop
Linux PopOS 20.04

perf: alias output to input of identity operations instead of copying

Describe the bug

The Identity, Squeeze, Unsqueeze, Reshape, Flatten and Dropout ops basically forward their single input tensor unchanged (some change the shape of the tensor, but we don't really care about that as the underlying data in the buffer still looks the same).

The library currently generates a shader for such an op that simply copies input to a (new) output buffer. This seems unnecessary; the next op could simply read the buffer that serves as input to the identity op.

In the compile stage we could just alias the buffer (either by telling the op following an identity op to look at the identity op's input buffer name, or by inserting a reference to the same buffer in the buffer list). The copy shader should just be removed altogether (we could keep it as it is quite informative to those new to WGSL, as I experienced myself..).

I may have a shot at implementing this later (busy week ahead though) - just putting it here so I won't forget.

Support Stable Diffusion model

Is your feature request related to a problem? Please describe.
I would like to be able to run Stable Diffusion using wonnx

Describe the solution you'd like
At least these operators are missing and should be implemented before even trying to run Stable Diffusion on wonnx:
Einsum, Erf, Expand, InstanceNormalization, Shape, Slice

This is the minimum based on this guide that simplifies the onnx model (see the simplification table):
https://www.photoroom.com/tech/stable-diffusion-25-percent-faster-and-save-seconds/

Probably many more things will be needed, but I'm creating this issue because it can be a really interesting use case to be able to run SD in rust on the GPU directly.

I don't have much experience with wonnx or even ML, but I decided to create this issue because it surprised me how few operators are missing to run this model. I would need to get more experience with stable diffusion, diffusers library and onnx in python before attempting to port it here, but maybe there are more experienced users interested too.

Re-use intermediate buffers

Is your feature request related to a problem? Please describe.

Currently, WONNX will allocate a buffer for each operator output. This output buffer is then read by at least one subsequent operator. After the output has been read by all operators that use it as input, it is not used any longer, but it is not deallocated until the Session is dropped (buffers are re-used in future inferences). These buffers take up GPU memory, and because GPUs do not swap as far as I know, they limit the maximum size of model we can use.

(Note, I am on a MacBook M1 Max with 64 GB memory shared between CPU-GPU so have not run into this issue myself yet)

Describe the solution you'd like

Pre-allocating buffers is desirable to ensure inference is fast. This means we should not deallocate buffers after we're done with them (we would otherwise have to allocate them again at inference time).

As many models are very 'deep', it is very much possible to pre-allocate a smaller number of buffers and re-use these. A simple example graph:

Input -> A -> B -> C -> Output

In the above, we currently allocate for Input and outputs of A, B and C. If the output for C fits in the output buffer of A, we could simply reuse A's output buffer for C's output: after B is done reading A's output it will never be used anyway (B must use its own output buffer, as it is still reading from A's output buffer).

A more complicated example:

Input -> A 
A -> B -> C
A -> D -> E
C + E -> Output

In this case, the output of 'A' is used by both B and D, and can only be re-used after both B and D have executed.

This should be fairly easy to implement by maintaining some sort of 'buffer pool' while sequencing the DAG into GPU operations, and calculating the minimum number and sizes of buffers that should be allocated. This should have some sort of look-ahead to allocate a bigger buffer if an operator further in the graph needs it (so it can be shared with an 'earlier' operator that requires a smaller buffer)
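A rough, wonnx-independent sketch of the bookkeeping this could involve (the Op type, the names and the sizes below are invented for illustration and are not the sequencer's actual data structures): record the last operator that reads each tensor, then greedily hand out pooled buffers that are already free and large enough.

use std::collections::HashMap;

struct Op {
    inputs: Vec<String>,
    output: String,
    output_size: usize, // bytes needed for this op's output tensor
}

// Returns, per op, the index of the pooled buffer its output may use, plus the pool sizes.
fn assign_buffers(ops: &[Op]) -> (Vec<usize>, Vec<usize>) {
    // Index of the last op that reads each tensor.
    let mut last_use: HashMap<&str, usize> = HashMap::new();
    for (i, op) in ops.iter().enumerate() {
        for input in &op.inputs {
            last_use.insert(input.as_str(), i);
        }
    }

    let mut pool_sizes: Vec<usize> = Vec::new(); // size of each pooled buffer
    let mut free_at: Vec<usize> = Vec::new();    // step index from which the buffer is free again
    let mut assignment = Vec::with_capacity(ops.len());

    for (i, op) in ops.iter().enumerate() {
        // Reuse a buffer that is already free and large enough, if any exists.
        let reusable = (0..pool_sizes.len())
            .find(|&b| free_at[b] <= i && pool_sizes[b] >= op.output_size);
        let buf = if let Some(b) = reusable {
            b
        } else {
            pool_sizes.push(op.output_size);
            free_at.push(0);
            pool_sizes.len() - 1
        };
        // The buffer stays occupied until the last reader of this output has run.
        // (Graph outputs, which are never read by another op, would need special handling.)
        free_at[buf] = last_use.get(op.output.as_str()).map(|&j| j + 1).unwrap_or(i + 1);
        assignment.push(buf);
    }
    (assignment, pool_sizes)
}

fn main() {
    // Input -> A -> B -> C -> Output, as in the example above; all outputs the same size.
    let ops = vec![
        Op { inputs: vec!["input".into()], output: "a".into(), output_size: 1024 },
        Op { inputs: vec!["a".into()], output: "b".into(), output_size: 1024 },
        Op { inputs: vec!["b".into()], output: "c".into(), output_size: 1024 },
    ];
    let (assignment, pool) = assign_buffers(&ops);
    // Prints assignment [0, 1, 0]: C's output reuses A's buffer, so only two buffers are needed.
    println!("assignment = {:?}, pool sizes = {:?}", assignment, pool);
}

This is only a sketch of the greedy strategy; the look-ahead for larger buffers described above is not included.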

Describe alternatives you've considered

That would be one of (1) buying a larger GPU, (2) use smaller models only or (3) implement some sort of swapping...

(I might be able to implement this later on)

Add overview of ONNX operator (sets) supported

Is your feature request related to a problem? Please describe.

For users it would be very helpful to know which operators are supported.

Describe the solution you'd like

A table listing all ONNX operators, indicating the level of support in WONNX (complete, partial/incorrect or no implementation) and the shader file the implementation is in (for developers).

The full list can be found here. It seems WONNX currently implements:

  • Abs, Acos, Asin, Atan, Ceil, Cos, Cosh, Exp, Floor, Log, Round, Sign, Sin, Sinh, Sqrt, Tan, Tanh (endomorphism/map.wgsl)
  • Reshape, Dropout, Flatten, Squeeze, Softmax (endomorphism/copy.wgsl)
  • Add, And, Div, Equal, Greater, GreaterOrEqual, Less, LessOrEqual, Mod, Mul, Or, Sub (endomorphism/arithmetic.wgsl)
  • BatchNormalization (endomorphism/batchnormalization.wgsl)
  • Celu, Elu (endomorphism/activation.wgsl)
  • Concat (matrix/concat.wgsl)
  • MaxPool, AveragePool (only support NxCxHxW for the moment) (pool/aggregate.wgsl)
  • Conv, ConvRelu (Conv only supports NxCxHxW for the moment) (pool/conv_kernel_1.wgsl, pool/conv_kernel_3.wgsl, pool/conv.wgsl).
  • SqueezenetConvGroup (containers/SqueezenetConvGroup.wgsl) (Not sure if this is actually an ONNX operator?)
  • Gemm, MatMul (matrix/gemm_1.wgsl, matrix/gemm.wgsl)
  • Relu, Sigmoid, Softsign, Softplus, Clip (endomorphism/activation.wgsl)
  • Transpose (matrix/transpose.wgsl)

Describe alternatives you've considered

Additional context

Can't run any Model via WebGPU + WASM on Chrome

Description

I could not run even a single model via WebGPU + WASM. I tried to run squeeze.html, single_relu, or any other self-simplified .onnx. I always get a lot of warnings while initializing the model, and when I run an inference I get the correct tensor shape, but all values are 0.

Tint WGSL reader failure: :11:8 error: invalid type for struct member

Reproduce

Following the description for the examples, using the latest Chrome Canary.

Expected Behaviour

A correct tensor after inference.

Desktop (please complete the following information):

  • OS: macOS 12.6 (i7, MBP16, 2019, base model)
  • Browser Chrome Canary
  • Version 109.0.5388.0

Training

Is your feature request related to a problem? Please describe.
I am trying to develop an ML project which is supposed to run on Rust. In order to allow model portability, ONNX was chosen.

Does wonnx plan to allow training in the future? This would be very useful because otherwise people may have to rewrite their Rust code in Python, or interface with it, to allow training with Python-based frameworks.

Describe the solution you'd like
A way to formulate and train the onnx model and save it.

Describe alternatives you've considered
Tract and onnxruntime-rs were identified as the main contenders. However, tract is meant for embedded devices and is not GPU-accelerated, while onnxruntime-rs would not build and did not support the latest version of onnxruntime. Neither supports training.

Additional context

Only perform chain optimization when intermediate values are not (also) used

Currently, for a graph like X -> Conv -> Relu -> Y we fuse to X -> ConvRelu -> Y. This assumes that the output for Conv is not used directly. Usually this is the case but we should check. In general the chain optimization function should only be called for chains where outputs do not 'escape' (i.e. the rest of the graph only reads the output from the chain and not intermediate outputs).

fn optimize_chain(

Sequence seems extremely slow

Describe the bug
I have a model with 159 nodes, and the wonnx log claims to have sequenced 220 tensors. The whole sequencing procedure takes up to 230 seconds (!) on my M1 MacBook.

Expected behavior
The whole sequencing step should complete in a reasonable amount of time.

Desktop

  • OS: macOS
  • Model: M1

Add a way to select GPU

Is your feature request related to a problem? Please describe.

On my old MacBook Pro that has both an iGPU and dGPU, wonnx (wgpu) will select the iGPU. I'd like to be able to select the dGPU as it is possibly much faster.

Describe the solution you'd like

Some way to tell Session (upon creating) which device it should pick. WGPU has some facilities for this (you can tell it a power preference or filter the device list based on integrated/discrete, etc.).

Describe alternatives you've considered

WGPU seems to honor the WGPU_ADAPTER_NAME environment variable, but only in its own tests. I think having an interface on Session is cleaner, as it allows applications to make the choice.

Gemm does not appear to work properly when input dim is Nx2

Describe the bug

In BertSQuAD, there is this Gemm operation:

(screenshot omitted)

Executing this leads to all zeroes even though the inputs are all non-zero. Looking through the code it seems the shader assumes the second dimension of input B to be at least 4 (it multiplies blocks of 4x4).

To Reproduce

Perform Gemm with an input B of size NxM where M < 4, e.g. 768x2 as in my example. The output will be all zeroes.

Expected behavior

Output should be non-zero.

Screenshots
n/a

Desktop (please complete the following information):

  • OS: macOS

Incorrect results for MediaPipe `face_detection_short_range` model

On MediaPipe's face detection network, the wonnx inference result greatly differs from tract's.

face_detection_short_range.onnx.zip

(this network was converted from the original tflite model)

Feed it an arbitrary 128x128 image. The wonnx result looks something like this:

[-31.329308, 22.987724, 110.36664, 112.46082, 109.49552, 151.28168, 70.86194, 13.971132, 27.654364, 39.442307, -8.873068, -33.579136, 11.462783, 13.264291, 33.41782, 53.894753, -29.915123, 86.37331, 158.61485, 38.95253, 67.99216, 99.54569, 39.838703, -128.56976, -161.39238, -63.05768, -63.9815, -141.44418, -134.7866, -70.48507, 14.594941, 86.63411, -18.900349, 152.99591, 241.42319, 77.663086, 21.074593, -8.589523, -26.858927, -137.44624, -225.09888, -124.80825, -69.34065, -146.73308, -201.0738, -159.46045, -28.638336, 72.56891, -23.126455, 134.3441, 337.617, 257.8597, 166.15813, 83.97563, 58.88012, 17.494957, -77.061226, -68.6636, 25.775728, 6.809413, -65.981094, -101.012245, -62.723034, 23.820261, -26.005894, 141.68248, 327.7928, 293.8865, 212.42795, 200.31885, 245.90173, 177.6146, 15.378365, -63.167755, 23.42553, 90.33595, 67.60708, -22.951397, -116.824715, -50.5637, -31.34872, 146.17146, 372.9728, 289.93735, 207.12747, 228.83446, 320.10992, 259.61838, 40.213455, -43.39573, 17.88327, 89.107925, 114.56751, 17.928455, -116.64837, -63.613777, -30.568298, 156.38216, 355.94586, 262.23032, 182.57199, 171.51064, 302.5447, 315.27197, 139.17767, 8.146385, 25.70171, 31.875103, 25.77427, -49.93356, -103.18455, -15.509784, -32.35147, 141.4279, 349.84766, 272.8423, 191.99379, 121.93088, 252.81538, 303.8322, 185.5089, 55.215675, 65.160355, 84.71897, 36.791264, -44.182625, -45.91356, 35.379143, -37.807976, 121.27687, 332.08752, 273.1187, 202.1815, 103.323906, 163.84425, 248.96017, 158.7855, 64.62752, 105.98001, 166.0515, 149.22621, 44.38842, -7.6704082, 26.018696, -38.82416, 117.7954, 338.27625, 226.56522, 197.4386, 131.23521, 143.07733, 243.10326, 198.69801, 105.70943, 140.65405, 199.46774, 206.77307, 66.399635, -82.90093, -50.489292, -35.01958, 120.69771, 336.47873, 156.58215, 136.86258, 134.20251, 151.27469, 243.63339, 247.01381, 163.91219, 129.86205, 198.4202, 215.65662, 74.31282, -89.870415, -47.996582, -31.968904, 132.96315, 333.59497, 97.59804, 62.57775, 83.27334, 122.128456, 239.68765, 215.3507, 123.76564, 76.362564, 176.75986, 249.7019, 132.31395, -35.86292, -28.389395, -35.614105, 143.59947, 348.2426, 122.428085, 39.762253, 42.45451, 79.04909, 187.85025, 150.7692, 23.673624, 2.6443653, 171.36761, 313.45264, 201.77142, 17.929651, -13.183823, -44.12065, 145.0343, 389.65253, 247.26779, 160.96178, 135.09111, 136.88925, 225.8276, 272.2921, 135.10902, 47.959064, 185.70529, 396.7454, 369.82935, 173.2155, 85.17445, -30.185629, 120.23997, 350.4025, 454.582, 461.1924, 458.9776, 463.3496, 485.1113, 498.21774, 454.50623, 439.25137, 422.0701, 453.4772, 386.24908, 406.1114, 319.17062, -23.54309, 95.79411, 212.816, 256.079, 265.09653, 265.3129, 269.25787, 291.85925, 275.52267, 241.39058, 234.68846, 271.96817, 262.6012, 219.36761, 231.03804, 243.14087, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 
[output truncated: the remainder consists of long runs of zeros interspersed with similarly large positive and negative values]
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 5165.4375, 6811.882, 7738.329, 8170.6626, 8409.369, 7551.087, 5669.677, 2988.025, 9055.888, 13098.383, 15207.349, 16498.633, 16893.68, 16008.253, 12499.527, 6721.186, 9661.512, 14483.381, 17171.826, 19093.633, 20061.174, 19470.082, 16247.816, 8956.228, 8868.818, 12669.457, 14942.524, 17160.531, 18673.441, 19244.213, 16459.28, 9605.363, 8679.765, 12304.512, 13845.98, 15335.543, 17124.514, 18392.361, 16856.05, 10287.746, 9468.955, 12813.319, 14198.071, 15292.392, 16848.951, 17905.486, 16480.816, 9849.591, 9433.248, 12903.702, 14265.032, 14986.261, 15802.292, 16284.515, 15126.505, 9087.682, 6846.506, 10285.445, 11125.031, 11323.073, 12050.346, 12179.558, 10894.793, 6526.217, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 11804.374, 15924.507, 18035.201, 19239.443, 19902.621, 18283.436, 14331.175, 8171.8374, 18608.133, 29129.549, 33880.508, 36569.94, 38035.45, 35181.35, 28547.361, 16407.514, 21048.725, 33238.66, 38912.24, 42409.0, 45087.793, 42825.863, 35318.008, 21001.393, 19737.56, 30441.605, 35752.89, 39117.574, 42476.734, 42366.277, 37178.875, 23297.34, 19175.281, 28854.87, 32809.598, 35736.43, 39605.16, 41139.645, 37377.203, 24348.957, 20232.945, 30067.498, 32769.938, 34815.156, 38273.594, 39793.855, 36489.355, 23650.678, 20501.875, 28666.19, 31465.828, 32761.414, 35278.285, 35677.04, 32116.748, 20558.104, 15978.316, 23900.07, 25352.713, 26057.756, 27248.977, 26934.451, 23653.426, 14616.846, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, -2305.3599, -4728.827, -5264.5405, -5363.52, -5437.37, -4946.22, -3665.9746, -2051.7087, -5554.5366, -8979.697, -10321.228, -11178.336, -11631.225, -10953.655, -8420.695, -4652.7397, -6205.475, -9714.384, -11909.738, -12973.26, -13820.744, -13412.405, -10993.847, -5947.511, -5533.7446, -8722.371, -10757.849, -11922.922, -13107.36, -13227.531, -11205.331, -6550.592, -5452.262, -8396.947, -9930.9, -10892.873, -12143.135, -12715.83, -11388.42, -6956.035, -5769.433, -8768.501, -10181.801, -10844.123, -11823.061, -12358.688, -11124.595, -6893.891, -5377.4775, -8068.059, -9232.23, -9729.437, -10260.813, -10552.788, -9595.619, -5588.0596, 
-4384.7036, -6892.89, -7396.6904, -7451.602, -8076.794, -7878.1455, -7079.6914, -3998.1917, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]

Tract's (correct) result contains no 0.0 values. It looks like large blocks of the output are just zeroed out with wonnx, and the non-zero values are also wrong.

Fuse mapping ops

Is your feature request related to a problem? Please describe.

When there are consecutive mapping operations (Neg, Relu, etc.), we should not execute them serially, each in its own shader - instead we should write a single shader that computes neg(relu(input)) in one go (at least when the intermediate result from Relu in this example is not used elsewhere).

Describe the solution you'd like

Fusing should happen in the optimizer. We can introduce a custom op type wonnx.Map that takes one input and an attribute describing the functions to perform consecutively (in the above example it would contain Relu,Neg).

To also accommodate binary functions (Add, Sub, etc.) we might even allow an arbitrary number of inputs and have the attribute describe (in RPN) the desired operations, e.g. neg(relu(add(a, sub(b, c)))) would have three inputs (b, c, a in that order) and the attribute could contain Push, Push, Sub, Push, Add, Relu, Neg. The compiler can then simply write out the WGSL corresponding to this.
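For illustration, here is a minimal sketch (not wonnx's actual compiler code; the MapOp enum and lower_rpn_to_wgsl function are made up for this issue) of how such an RPN attribute could be lowered to a single WGSL expression:

// Hypothetical RPN "instruction" for the proposed wonnx.Map op.
enum MapOp {
    Push, // push the next input onto the stack
    Relu,
    Neg,
    Add,
    Sub,
}

/// Lower an RPN program to a single WGSL expression string. A well-formed
/// program leaves exactly one expression on the stack; otherwise None.
fn lower_rpn_to_wgsl(program: &[MapOp], inputs: &[&str]) -> Option<String> {
    let mut stack: Vec<String> = Vec::new();
    let mut next_input = inputs.iter();
    for op in program {
        match op {
            MapOp::Push => stack.push(next_input.next()?.to_string()),
            MapOp::Relu => {
                let x = stack.pop()?;
                stack.push(format!("max({x}, 0.0)"));
            }
            MapOp::Neg => {
                let x = stack.pop()?;
                stack.push(format!("-({x})"));
            }
            MapOp::Add => {
                let (rhs, lhs) = (stack.pop()?, stack.pop()?);
                stack.push(format!("({lhs} + {rhs})"));
            }
            MapOp::Sub => {
                let (rhs, lhs) = (stack.pop()?, stack.pop()?);
                stack.push(format!("({lhs} - {rhs})"));
            }
        }
    }
    if stack.len() == 1 { stack.pop() } else { None }
}

fn main() {
    use MapOp::*;
    // The example from above: inputs (b, c, a) and Push, Push, Sub, Push, Add, Relu, Neg.
    let expr =
        lower_rpn_to_wgsl(&[Push, Push, Sub, Push, Add, Relu, Neg], &["b", "c", "a"]).unwrap();
    println!("{expr}"); // prints: -(max(((b - c) + a), 0.0))
}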

Describe alternatives you've considered

Fusing would also be possible at the shape inference stage.

We should check whether the current ConvRelu optimization (which fuses Conv and Relu) works properly if the output from Conv is also used further on.

Additional context

How to implement dynamic shape ops?

Several ONNX operators such as Reshape and ConstantOfShape take in two inputs, where the first is the input data and the second defines the shape of the output data.

This means that (at least in theory) the shape can be dynamic and so is the shape of the output of the node. This would mean that we cannot compile shaders for the next ops up front because the input shape might have changed (and some operators depend on the input shape). Below is an example from the 'Tiny YOLO v3' model that shows dynamic reshaping:

[screenshot: part of the 'Tiny YOLO v3' graph showing a Reshape with a dynamically computed shape input]

We should think about whether to implement this (necessary if we want to support models like YOLO, BERT) but also how. Possibly the 'shape' parts can be calculated in advance (e.g. before running the meat of the model), but in some cases it is possibly so dynamic that we'd have to compile shaders during runtime (which we'd rather not do because of the performance impact). Would love to hear your thoughts!
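As a rough illustration of the 'calculate the shape parts in advance' idea (the NodeInfo and Initializers types below are simplified stand-ins made up for this issue, not wonnx's actual IR), a Reshape could be treated as static whenever its shape input is backed by a constant initializer:

use std::collections::HashMap;

// Hypothetical, simplified stand-ins for graph data; wonnx's real IR differs.
struct NodeInfo {
    op_type: String,
    inputs: Vec<String>,
}
type Initializers = HashMap<String, Vec<i64>>; // tensor name -> constant int64 data

/// Returns the target shape of a Reshape node if it can be determined before
/// running the model, i.e. when the `shape` input is a constant initializer.
/// (Simplified: does not resolve the special 0 and -1 entries ONNX allows.)
fn static_reshape_output(node: &NodeInfo, consts: &Initializers) -> Option<Vec<i64>> {
    if node.op_type != "Reshape" {
        return None;
    }
    // ONNX Reshape: inputs[0] is the data, inputs[1] is the target shape tensor.
    let shape_input = node.inputs.get(1)?;
    consts.get(shape_input).cloned()
    // None here means the shape is only known at run time, so shaders
    // downstream of this node cannot be compiled up front.
}

fn main() {
    let mut consts = Initializers::new();
    consts.insert("shape".into(), vec![1, 255, 26, 26]); // example values only
    let node = NodeInfo {
        op_type: "Reshape".into(),
        inputs: vec!["data".into(), "shape".into()],
    };
    assert_eq!(static_reshape_output(&node, &consts), Some(vec![1, 255, 26, 26]));
}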

Move examples / helpers to separate crates

This is a follow-up of #62. I think most of the utility methods in utils are currently not needed in production; they just add to the binary / wasm file size. Why don't we move them to a separate crate and add it as a dependency in tests / examples? If you think this is OK, I could set up a PR for this.

Fix Python release workflow

Describe the bug

For some reason the workflow for publishing for Windows/Mac works, but fails for Linux.

To Reproduce
See CI results and https://pypi.org/project/wonnx/#history

Expected behavior
Built packages for all platforms.

The workflows are different, perhaps for historical reasons (I believe the scripts used nightly Rust on some platforms, but this may not be necessary anymore). We might try to unify the scripts so that all platforms are built in the same way.

AveragePool fails when output width * height is not a multiple of 4

Describe the bug

When you feed AveragePool an NxCxWxH tensor where the output WxH (which depends on the kernel size) is not divisible by four, the following error occurs:

Shader error:
error: expected ')', found 'u'
   ┌─ wgsl:25:23
   │
25 │         if (gidx < 4.5u) {
   │                       ^ expected ')'

The relevant part of the template (templates/pool/aggregate.wgsl):

if (gidx < {{ o_lens[0] / 4 }}u) {

I ran into this when implementing GlobalAveragePool which is basically AveragePool with the kernel size equal to the image size, i.e. simply averaging to a single number per channel (NxCxWxH -> NxCx1x1).

To Reproduce

Use AveragePool in a way that sets o_lens[0] to something not divisible by 4.

Expected behavior

This should just work - the output may of course be a vector of length rounded up to the next multiple of 4, which is automatically chopped off if it is the final output vector (or simply not relevant when it is an input to the next op).
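One possible direction (a sketch only; wonnx's actual shader-context code differs, and o_lens_packed is a hypothetical template variable): round the element count up to a whole number of vec4s on the Rust side before rendering the template, so the emitted WGSL guard is always an integer literal:

/// Number of vec4 invocations needed to cover `o_len` scalar output elements.
/// Passing this into the template (instead of `o_lens[0] / 4`) keeps the emitted
/// guard an integer, e.g. `if (gidx < 5u)` for 18 output elements.
fn packed_len(o_len: u64) -> u64 {
    (o_len + 3) / 4 // ceiling division by 4
}

fn main() {
    assert_eq!(packed_len(18), 5); // not divisible by 4: rounds up
    assert_eq!(packed_len(16), 4); // already a multiple of 4
}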

Make it easier to inject custom op

Currently I must modify the source code to add support for a custom op, which is quite inconvenient. I think the large match in compile.rs could be abstracted into a trait; by allowing users to implement the trait and register custom ops in an in-app registry, it would be much easier to extend the framework.
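A rough sketch of what such a registry could look like (the names OpCompiler, OpRegistry and CompileContext are hypothetical and not part of wonnx's current API):

use std::collections::HashMap;

/// Hypothetical compile-time information handed to an op implementation.
pub struct CompileContext {
    pub input_shapes: Vec<Vec<usize>>,
    pub attributes: HashMap<String, String>,
}

/// Trait a custom op would implement: produce WGSL source plus a dispatch size.
pub trait OpCompiler {
    fn compile(&self, ctx: &CompileContext) -> Result<(String /* WGSL */, [u32; 3]), String>;
}

/// Registry mapping an ONNX op type to its compiler; built-in ops would be
/// registered by default and users could add or override entries.
#[derive(Default)]
pub struct OpRegistry {
    ops: HashMap<String, Box<dyn OpCompiler>>,
}

impl OpRegistry {
    pub fn register(&mut self, op_type: &str, compiler: Box<dyn OpCompiler>) {
        self.ops.insert(op_type.to_string(), compiler);
    }

    pub fn get(&self, op_type: &str) -> Option<&dyn OpCompiler> {
        self.ops.get(op_type).map(|b| b.as_ref())
    }
}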

MaxPool op is not correct

Describe the bug
I think MaxPool is not implemented correctly. Given that there is currently no test against MaxPool, here is one adapted from the ONNX test suite:

#[test]
pub fn test_maxpool() {
    let mut input_data = HashMap::new();

    let data: Vec<f32> = (1..=25).map(|x| x as f32).collect();
    let shape = vec![1, 1, 5, 5];
    input_data.insert("X".to_string(), data.as_slice().into());

    let conv_model = model(graph(
        vec![tensor("X", &shape)],
        vec![tensor("Y", &[1, 1, 2, 2])],
        vec![],
        vec![],
        vec![node(
            vec!["X"],
            vec!["Y"],
            "max_pool",
            "MaxPool",
            vec![
                attribute("kernel_shape", vec![2, 2]),
                attribute("strides", vec![2, 2]),
            ],
        )],
    ));

    let session =
        pollster::block_on(wonnx::Session::from_model(conv_model)).expect("Session did not create");
    let result = pollster::block_on(session.run(&input_data)).unwrap();
    assert_eq!(result["Y"], [7.0, 9.0, 17.0, 19.0]);
}

Adapted from here

wonnx outputs [7.0, 0.0, 0.0, 0.0], but the expected result is [7.0, 9.0, 17.0, 19.0].
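For reference, a small CPU-side sketch (independent of wonnx) of 2x2 max pooling with stride 2 over the same 5x5 input, reproducing the expected values:

fn main() {
    // 5x5 input filled with 1..=25, as in the test above.
    let input: Vec<f32> = (1..=25).map(|x| x as f32).collect();
    let (h, w) = (5usize, 5usize);
    let (kh, kw) = (2usize, 2usize);
    let (sh, sw) = (2usize, 2usize);
    // No padding: output size is floor((dim - kernel) / stride) + 1 = 2.
    let (oh, ow) = ((h - kh) / sh + 1, (w - kw) / sw + 1);

    let mut output = Vec::with_capacity(oh * ow);
    for oy in 0..oh {
        for ox in 0..ow {
            let mut max = f32::MIN;
            for ky in 0..kh {
                for kx in 0..kw {
                    let v = input[(oy * sh + ky) * w + (ox * sw + kx)];
                    max = max.max(v);
                }
            }
            output.push(max);
        }
    }
    assert_eq!(output, vec![7.0, 9.0, 17.0, 19.0]);
}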

Command Builder for optimized flexible node computation

Is your feature request related to a problem? Please describe.
The current implementation of the command encoder is limited to optimization of a fixed-size command encoder.

Describe the solution you'd like
I want a minimalist command builder that can handle nodes that are not predefined and of arbitrary size.

Default device if no gpu?

Hi there,

I read that wonnx can use the GPU through graphics APIs like Metal and Vulkan. Just wondering, does it default to CPU inference if there is no GPU?

Thanks

Make comparison between CPU and GPU less strict

Is your feature request related to a problem? Please describe.
Experimentation shows that results on NVIDIA GPUs are, for some reason, a bit further from CPU results than they are on e.g. Apple M1. An example on a 1080 Ti:

cargo run --features=cpu --release -- infer ./data/models/opt-squeeze.onnx -i data=./data/images/pelican.jpeg --labels ./data/models/squeeze-labels.txt --top 3 --compare --benchmark
Error: Comparison("output element 285 differs too much: GPU says 8.999586 vs CPU says 8.999575 (difference is 0.000011444092)")

Describe the solution you'd like

Allow slightly more difference to exist between CPU and GPU before showing a warning (or make this configurable).
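One possible shape for such a check (a sketch only, not the CLI's actual code; the approx_eq function and the tolerance values are made up): compare with a combined absolute and relative tolerance instead of a fixed absolute epsilon:

/// Returns true when `gpu` and `cpu` are close enough, using both an
/// absolute and a relative tolerance (the values used here are examples only).
fn approx_eq(gpu: f32, cpu: f32, abs_tol: f32, rel_tol: f32) -> bool {
    let diff = (gpu - cpu).abs();
    diff <= abs_tol || diff <= rel_tol * gpu.abs().max(cpu.abs())
}

fn main() {
    // The difference from the report above passes a 1e-5 relative tolerance.
    assert!(approx_eq(8.999586, 8.999575, 1e-6, 1e-5));
}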

license clarification

In the Cargo.toml files, I find that the license is "MIT OR Apache-2.0". However, only a copy of the MIT license is checked in at the root.

We would very much prefer the code to be open-sourced under the dual license. We are excited about your work and would like to bring some parts into our young project for the Burn wgpu backend. Burn is open-sourced under MIT and Apache-2.0, so it would be easy to port some of your code. We will comply with copyright rules and notices as required.

Incorrect results for Transpose

Describe the bug
Transpose appears to produce incorrect results for certain permutations.

To Reproduce

In the following test case, these combinations work (as they do in NumPy):

  • Transpose perm=0,2,1,3 followed by Transpose perm=0,2,1,3, which should reverse the first transpose
  • Transpose perm=0,3,2,1 followed by Transpose perm=0,3,2,1, which should reverse the first transpose

The following works in NumPy (see expected behaviour below), but fails in wonnx:

  • Transpose perm=0,2,3,1 followed by Transpose perm=0,3,1,2, which should reverse the first transpose

(Note that in this case, the first perm is not equal to the second.)
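For reference, a small sketch (independent of wonnx internals) that computes the inverse of a permutation, confirming that [0,2,3,1] and [0,3,1,2] are inverses of each other while the two working permutations are their own inverses:

fn inverse_perm(perm: &[usize]) -> Vec<usize> {
    let mut inv = vec![0; perm.len()];
    for (i, &p) in perm.iter().enumerate() {
        inv[p] = i; // output axis `i` came from input axis `p`
    }
    inv
}

fn main() {
    assert_eq!(inverse_perm(&[0, 2, 3, 1]), vec![0, 3, 1, 2]);
    // Self-inverse permutations, like the two working cases above:
    assert_eq!(inverse_perm(&[0, 2, 1, 3]), vec![0, 2, 1, 3]);
    assert_eq!(inverse_perm(&[0, 3, 2, 1]), vec![0, 3, 2, 1]);
}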

fn test_transpose_4d_perm(transpose_first: &[i64], transpose_second: &[i64]) {
    let mut input_data = HashMap::new();
    let data = (0..2 * 3 * 4).map(|x| x as f32).collect::<Vec<f32>>();
    input_data.insert("X".to_string(), data.as_slice().into());

    let x_dims = vec![1, 2, 3, 4];
    let intermediate_dims: Vec<i64> = transpose_first
        .iter()
        .map(|i| x_dims[*i as usize])
        .collect();

    // Model: X -> Transpose -> Y -> Transpose -> Z; X==Z
    let model = model(graph(
        vec![tensor("X", &x_dims)],
        vec![tensor("Z", &x_dims)],
        vec![tensor("Y", &intermediate_dims)],
        vec![],
        vec![
            node(
                vec!["X"],
                vec!["Y"],
                "Transpose",
                "Transpose",
                vec![attribute("perm", transpose_first.to_vec())],
            ),
            node(
                vec!["Y"],
                vec!["Z"],
                "Transpose",
                "Transpose",
                vec![attribute("perm", transpose_second.to_vec())],
            ),
        ],
    ));

    let session =
        pollster::block_on(wonnx::Session::from_model(model)).expect("session did not create");
    let result = pollster::block_on(session.run(&input_data)).unwrap();

    common::assert_eq_vector((&result["Z"]).try_into().unwrap(), &data);
}

/* This tests the equivalent of the following Python code:
a = np.arange(0,24).reshape((1,2,3,4));
a == a.transpose(perm).transpose(inverse of perm)
*/
#[test]
fn test_two_transposes_4d() {
    // a == a.transpose([0,2,1,3]).transpose([0,2,1,3])
    test_transpose_4d_perm(&[0, 2, 1, 3], &[0, 2, 1, 3]);

    // ! WORKS in python, FAILS in wonnx...
    // a == a.transpose([0,2,3,1]).transpose([0,3,1,2])
    // test_transpose_4d_perm(&[0, 2, 3, 1], &[0, 3, 1, 2]);
    // a == a.transpose([0,3,2,1]).transpose([0,3,2,1])
    test_transpose_4d_perm(&[0, 3, 2, 1], &[0, 3, 2, 1]);
}

Expected behavior

>>> a = np.arange(0,24).reshape((1,2,3,4));
>>> a
array([[[[ 0,  1,  2,  3],
         [ 4,  5,  6,  7],
         [ 8,  9, 10, 11]],

        [[12, 13, 14, 15],
         [16, 17, 18, 19],
         [20, 21, 22, 23]]]])
>>> a.transpose([0,2,3,1]).transpose([0,3,1,2])
array([[[[ 0,  1,  2,  3],
         [ 4,  5,  6,  7],
         [ 8,  9, 10, 11]],

        [[12, 13, 14, 15],
         [16, 17, 18, 19],
         [20, 21, 22, 23]]]])
>>> a == a.transpose([0,2,3,1]).transpose([0,3,1,2])
array([[[[ True,  True,  True,  True],
         [ True,  True,  True,  True],
         [ True,  True,  True,  True]],

        [[ True,  True,  True,  True],
         [ True,  True,  True,  True],
         [ True,  True,  True,  True]]]])

Conv with bias is not correct

After several days of debugging, I think I finally understand why wonnx is giving incorrect results: it seems that Conv is not calculating the bias correctly.

Test:

#[test]
fn conv_bias() {
    let n = 5;
    let c = 1;
    let mut input_data = HashMap::new();

    let data: Vec<f32> = (0..25).map(|x| x as f32).collect();
    let shape = vec![1, c as i64, n as i64, n as i64];
    input_data.insert("X".to_string(), data.as_slice().into());

    let kernel_n = 3;
    let m = 1;
    let data_w: Vec<f32> = (0..18).map(|_| 1.0f32).collect();
    let data_b = vec![0.0, 0.0];
    let conv_model = model(graph(
        vec![tensor("X", &shape)],
        vec![tensor("Y", &[1, 2, 5, 5])],
        vec![tensor("W", &[2, c, 3, 3])], // tensor("B", &[2])],
        vec![initializer("W", data_w)], // initializer("B", data_b)],
        vec![node(
            vec!["X", "W"], // "B"],
            vec!["Y"],
            "conv",
            "Conv",
            vec![
                attribute("kernel_shape", vec![3, 3]),
                attribute("strides", vec![1, 1]),
                attribute("pads", vec![1, 1, 1, 1]),
            ],
        )],
    ));

    let session =
        pollster::block_on(wonnx::Session::from_model(conv_model)).expect("Session did not create");
    let mut result = pollster::block_on(session.run(&input_data)).unwrap();
    assert_eq!(
        Vec::<f32>::try_from(result.remove("Y").unwrap()).unwrap(),
        &[
            12.0, 21.0, 27.0, 33.0, 24.0, 33.0, 54.0, 63.0, 72.0, 51.0, 63.0, 99.0, 108.0, 117.0,
            81.0, 93.0, 144.0, 153.0, 162.0, 111.0, 72.0, 111.0, 117.0, 123.0, 84.0, 12.0, 21.0,
            27.0, 33.0, 24.0, 33.0, 54.0, 63.0, 72.0, 51.0, 63.0, 99.0, 108.0, 117.0, 81.0, 93.0,
            144.0, 153.0, 162.0, 111.0, 72.0, 111.0, 117.0, 123.0, 84.0
        ],
    )
}

Now remove the comment markers around the B tensor (so the bias input is actually passed); the result should be exactly the same, since the bias is zero. Instead, wonnx gives wrong results.
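As a sanity check, here is a naive CPU reference (independent of wonnx, written just for this issue) for the exact configuration above - 5x5 input, two 3x3 all-ones kernels, stride 1, padding 1, per-channel bias - which reproduces the expected values regardless of whether a zero bias is passed:

fn main() {
    let (h, w) = (5i64, 5i64);
    // Input filled with 0..25, as in the test above (single input channel).
    let input: Vec<f32> = (0..(h * w)).map(|x| x as f32).collect();
    let out_channels = 2usize;
    let bias = vec![0.0f32, 0.0f32]; // one bias value per output channel

    let mut output = Vec::new();
    for m in 0..out_channels {
        for oy in 0..h {
            for ox in 0..w {
                let mut acc = 0.0f32;
                // 3x3 kernel of ones, padding 1 on each side, stride 1.
                for ky in -1..=1i64 {
                    for kx in -1..=1i64 {
                        let (iy, ix) = (oy + ky, ox + kx);
                        if iy >= 0 && iy < h && ix >= 0 && ix < w {
                            acc += input[(iy * w + ix) as usize]; // weight is 1.0
                        }
                    }
                }
                output.push(acc + bias[m]);
            }
        }
    }
    // First row of the expected output from the test above:
    assert_eq!(&output[0..5], &[12.0, 21.0, 27.0, 33.0, 24.0]);
    // Both output channels are identical because both kernels are all ones.
    assert_eq!(&output[0..25], &output[25..50]);
}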

MNIST test case fails after merge of MaxPool 'fix'

Describe the bug

The test_mnist test case currently fails on master. Some digging revealed this happened after the merge of #78 (ca6a5d6):

% git rev-parse HEAD
ca6a5d64ea6edcc30b025e6a112499d8282cff6b

% cargo test --test pretrained_models -- test_mnist --exact --nocapture
warning: profiles for the non root package will be ignored, specify profiles at the workspace root:
package:   /Users/tommy/Git/wonnx/wonnx-wasm/Cargo.toml
workspace: /Users/tommy/Git/wonnx/Cargo.toml
    Finished test [unoptimized + debuginfo] target(s) in 0.07s
     Running tests/pretrained_models.rs (target/debug/deps/pretrained_models-db8409beb2c5ac6d)

running 1 test
thread 'test_mnist' panicked at 'assertion failed: `(left == right)`
  left: `1`,
 right: `0`', wonnx/tests/pretrained_models.rs:47:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
test test_mnist ... FAILED

failures:

failures:
    test_mnist

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 2 filtered out; finished in 0.15s

error: test failed, to rerun pass '-p wonnx --test pretrained_models'

It works on the revision before the merge of #78 (656c8c0):

% git rev-parse HEAD                                                   
656c8c0e6817776e756666bcadb613bd07944d8a

% cargo test --test pretrained_models -- test_mnist --exact --nocapture
warning: profiles for the non root package will be ignored, specify profiles at the workspace root:
package:   /Users/tommy/Git/wonnx/wonnx-wasm/Cargo.toml
workspace: /Users/tommy/Git/wonnx/Cargo.toml
   Compiling wonnx v0.2.4 (/Users/tommy/Git/wonnx/wonnx)
    Finished test [unoptimized + debuginfo] target(s) in 2.96s
     Running tests/pretrained_models.rs (target/debug/deps/pretrained_models-db8409beb2c5ac6d)

running 1 test
test test_mnist ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 2 filtered out; finished in 0.22s

There are indeed two 'MaxPool' nodes in that model:

[screenshots: the two MaxPool nodes in the MNIST model graph]

To Reproduce

cargo test --test pretrained_models -- test_mnist --exact --nocapture

Expected behavior

No test failure :-)


Desktop (please complete the following information):

  • OS: macOS

Slice Operator

Currently the ONNX Slice operator is not implemented.

I have just started looking at the WGSL code and may be able to work this out slowly, but I wanted to know whether there is anything required before being able to implement the operator.

Cannot install pip package

I tried to install the wonnx pip package, but the installation failed.

To Reproduce
Run pip install wonnx.

Error

(wonnx) user@device ~ % pip install wonnx
Collecting wonnx
  Using cached wonnx-0.1.1.tar.gz (84 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Preparing metadata (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [8 lines of output]
      💥 maturin failed
        Caused by: Cargo metadata failed. Does your crate compile with `cargo build`?
        Caused by: `cargo metadata` exited with an error: error: multiple workspace roots found in the same workspace:
        /private/var/folders/vx/wqbngg455cd06qmc4gym99mw0000gp/T/pip-install-jsl0pkrb/wonnx_93d5408dc72a453daccf3439a157d63e
        /private/var/folders/vx/wqbngg455cd06qmc4gym99mw0000gp/T/pip-install-jsl0pkrb/wonnx_93d5408dc72a453daccf3439a157d63e/local_dependencies/wonnx
      Error running maturin: Command '['maturin', 'pep517', 'write-dist-info', '--metadata-directory', '/private/var/folders/vx/wqbngg455cd06qmc4gym99mw0000gp/T/pip-modern-metadata-jppvjhp_', '--interpreter', '/Users/user/miniconda3/envs/wonnx/bin/python']' returned non-zero exit status 1.
      Checking for Rust toolchain....
      Running `maturin pep517 write-dist-info --metadata-directory /private/var/folders/vx/wqbngg455cd06qmc4gym99mw0000gp/T/pip-modern-metadata-jppvjhp_ --interpreter /Users/user/miniconda3/envs/wonnx/bin/python`
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

Desktop (please complete the following information):

  • OS: MacOS 12.5.1

Failed to compile on M1 Mac

Describe the bug
I only ran the following four lines of code, and then a compile error occurred

cargo new wonnx_test
cd wonnx_test
cargo add wonnx
cargo run

   Compiling wonnx v0.3.0
error[E0597]: `node` does not live long enough
   --> /Users/ls/.cargo/registry/src/github.com-1ecc6299db9ec823/wonnx-0.3.0/src/ir.rs:123:59
    |
96  | impl<'model> Node<'model> {
    |      ------ lifetime `'model` defined here
...
123 |           let inputs: Result<Vec<Input<'model>>, IrError> = node
    |  ___________________________________________________________^
124 | |             .get_input()
    | |                        ^
    | |                        |
    | |________________________borrowed value does not live long enough
    |                          argument requires that `node` is borrowed for `'model`
...
179 |     }
    |     - `node` dropped here while still borrowed

  • OS: macOS 13.1
  • rustc 1.66.1 (90743e729 2023-01-10)

Logo in README Not Best For GitHub Dark Theme

Hey there, I just noticed that the logo doesn't look quite right on the GitHub dark theme.

Not sure what the best path forward is, but just thought I'd let you know in case you don't notice it because you're on the light theme.

[screenshot: the WONNX logo as rendered on the GitHub dark theme]

Neat looking project BTW! 👍

Add testing framework to compare against known-good implementations

Is your feature request related to a problem? Please describe.

In order to test correctness of the implementation, it would be a good idea to automatically compare it against some other known-good reference point (ideally the ONNX test suites, or, less ideally, some other mature implementation, e.g. https://github.com/sonos/tract).

Describe the solution you'd like

This could simply be a test that runs a set of ONNX models with specific inputs and outputs in WONNX and some other runtime, and then compares the result. I have tested this approach already here: https://github.com/pixelspark/nnx/blob/main/src/main.rs#L126 (and here's how to do inference with tract).

Describe alternatives you've considered

Well, writing tests that check every corner case by reading the spec very carefully :-)

CLI cannot deserialize models

Describe the bug
The CLI tool cannot deserialize models from this repository. It panics with the following msg:

> RUST_BACKTRACE=1 nnx info ./data/models/opt-squeeze.onnx

thread 'main' panicked at 'Could not deserialize the model: WireError(IncorrectTag(118))', wonnx-cli/src/main.rs:47:14
stack backtrace:
   0: rust_begin_unwind
             at /rustc/878aef79dcdf59d19bb8482202dc55e58ceb62ff/library/std/src/panicking.rs:584:5
   1: core::panicking::panic_fmt
             at /rustc/878aef79dcdf59d19bb8482202dc55e58ceb62ff/library/core/src/panicking.rs:142:14
   2: core::result::unwrap_failed
             at /rustc/878aef79dcdf59d19bb8482202dc55e58ceb62ff/library/core/src/result.rs:1814:5
   3: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
   4: nnx::main

To Reproduce
Steps to reproduce the behavior:

  1. clone repo
  2. cd into cloned dir
  3. install wonnx-cli via cargo install --git https://github.com/webonnx/wonnx.git wonnx-cli
  4. execute nnx info ./data/models/opt-squeeze.onnx

Expected behavior
The CLI should load the model and show some information.

Desktop (please complete the following information):

  • OS: Linux on Kernel 5.19.0
  • GPU: NVIDIA GeForce RTX 3090

Add a command line interface

Is your feature request related to a problem? Please describe.

A CLI can be of value in the following scenarios:

  • To quickly try or tinker with wonnx without programming (i.e. to see if a model runs on wonnx)
  • To quickly obtain metadata of a model (i.e. see inputs/outputs defined by the model)
  • To quickly run inference from the command line (e.g. to sort a bunch of photos)
  • For benchmarking

Describe the solution you'd like

I have a command line utility here that provides the following features:

  • Get model metadata (inputs/outputs, ops used) as well as an option to dump the model graph in GraphViz format
  • Run inference for arbitrary onnx models.
    • Inputs and outputs are automagically translated to tensors (i.e. images are automatically resized to fit a (1,3,x,y) or e.g. (1,1,x,y) tensor, and normalization is applied).
    • Can optionally read a labels file and attach meaning to outputs
    • Preliminary support for BERT-like text encoding (needs some work)
  • Supports using tract as a CPU-based backend, if enabled as a feature. This can be used as a fallback (--fallback), for comparing results (--compare), and for comparing performance (--benchmark)

In the future it would be very easy to add the following things:

  • Code generator to run an ONNX model from e.g. Python using wonnx.
  • HTTP server that provides an inference API for an ONNX model or directory of ONNX models

Describe alternatives you've considered

Not having our own CLI tool, or keeping it as an external tool. I believe there is value in having our own in this repository, especially now that we can cleanly separate it as a separate package in the workspace.

Additional context

I'd be happy to work on integrating my tool into this repository.

Allow --benchmark without --compare in CLI

Is your feature request related to a problem? Please describe.

To time execution of commands you currently need to specify both --compare and --benchmark, and also build with --features=cpu.

Describe the solution you'd like

The CLI tool should allow benchmarking without having the CPU feature.
