
onnxstack's People

Contributors

cassiebreviu, jdluzen, jeffward01, kimi0230, riddlemd, saddam213, theycallmehex


onnxstack's Issues

NuGet

Hi. I'd like to reuse your Stable Diffusion backend to port my old UI over. Have you thought about packaging your libraries on NuGet?
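If the libraries were packaged, consumers could pull them in with a plain package reference. A minimal sketch of what that would look like in a consuming .csproj — the package IDs OnnxStack.Core and OnnxStack.StableDiffusion are hypothetical here, since nothing has been published yet:

```xml
<ItemGroup>
  <!-- Hypothetical package IDs; nothing is published under these names yet -->
  <PackageReference Include="OnnxStack.Core" Version="0.1.0" />
  <PackageReference Include="OnnxStack.StableDiffusion" Version="0.1.0" />
</ItemGroup>
```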

WebUI does not support multi-models or pipelines

WebUI has fallen behind due to the inclusion of the WPF UI

I am wondering if we even need to keep it. Other OSes already have many choices for Stable Diffusion, so I think the number of people using OnnxStack on Linux and macOS will be low.

Perhaps it's time to scrap this project and replace it with an API instead. That would be easier to maintain and still cross-platform, and other people could then build a WebUI or integrate the API into existing SD UIs.

Or we could go the other direction: add auth etc. and make it a fully usable website for creating Midjourney-style services, allowing people to monetize their GPUs if they want.

Pros:

  1. Cross-platform
  2. Server/client based; the backend can be installed on a GPU server
  3. ASP.NET Core, so auth can easily be added for a public-facing website

Cons:

  1. Server/client based, so security concerns: XSS, etc.
  2. Image handling is much harder on the web (upload, download, etc.)

Thoughts?
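For the API direction, the surface could start tiny. A minimal ASP.NET Core sketch of what a generation endpoint might look like — the commented-out service registration and GenerateAsImageAsync call are assumptions based on the library's current API, not a committed design:

```csharp
// Program.cs - hypothetical sketch of an OnnxStack generation API
var builder = WebApplication.CreateBuilder(args);
// builder.Services.AddOnnxStackStableDiffusion(); // assumed registration helper
var app = builder.Build();

app.MapPost("/generate", async (GenerateRequest request /*, IStableDiffusionService sd */) =>
{
    // var image = await sd.GenerateAsImageAsync(...); // assumed service call
    // return Results.File(image, "image/png");
    return Results.Ok();
});

app.Run();

// Request contract; names are illustrative only
record GenerateRequest(string Prompt, string? NegativePrompt, int Steps = 15);
```

Auth would then be a matter of adding the standard ASP.NET Core authentication/authorization middleware in front of the endpoint.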

Suggestions, new ideas and general talk

Hello!
Thank you for this! I'm a huge fan of ONNX as it is very CPU-friendly. Could you please try to make AnimateDiff work with ONNX too? It would be great to have text-to-GIF and image-to-GIF with AnimateDiff on ONNX CPU in the future.

Kind regards

CUDA "invalid argument" Error When Using OnnxStack.StableDiffusion on GPU

Hello,

I'm looking to use OnnxStack.StableDiffusion in one of my projects, but I'm encountering an error during the image generation process:

info: OnnxStack.StableDiffusion.Diffusers.StableDiffusion.StableDiffusionDiffuser[0]
      [DiffuseAsync] - Diffuser starting...
info: OnnxStack.StableDiffusion.Diffusers.StableDiffusion.StableDiffusionDiffuser[0]
      [DiffuseAsync] - Model: StableDiffusion 1.5, Pipeline: StableDiffusion, Diffuser: TextToImage, Scheduler: LMS
2024-01-23 23:30:53.065717070 [E:onnxruntime:CSharpOnnxRuntime, cuda_call.cc:116 CudaCall] CUDA failure 1: invalid argument ; GPU=0 ; hostname=xxxxx ; file=/onnxruntime_src/onnxruntime/core/providers/cuda/gpu_data_transfer.cc ; line=73 ; expr=cudaMemcpyAsync(dst_data, src_data, bytes, cudaMemcpyDeviceToHost, static_cast<cudaStream_t>(stream.GetHandle())); 
One or more errors occurred. ([ErrorCode:Fail] CUDA failure 1: invalid argument ; GPU=0 ; hostname=572671e17f4d ; file=/onnxruntime_src/onnxruntime/core/providers/cuda/gpu_data_transfer.cc ; line=73 ; expr=cudaMemcpyAsync(dst_data, src_data, bytes, cudaMemcpyDeviceToHost, static_cast<cudaStream_t>(stream.GetHandle())); )

Before going further, here are some details:

  • I ran various tests, one on Windows and another on Linux using Docker, and encountered the same error in both.
  • I based my work on the Dockerfile from the OnnxStack git repository, which I had to adapt to compile my solution (shared below).
  • I followed the various examples available in the repository (see the code below).
  • I use the Microsoft.ML.OnnxRuntime.Gpu package.

The code (C#):

string prompt = "A cat";
var modelOptions = new ModelOptions(m_sdModelSet);
var res = await _stableDiffusionService.LoadModelAsync(m_sdModelSet);

var promptOptions = new PromptOptions
{
    Prompt = prompt,
    NegativePrompt = "bad",
    DiffuserType = DiffuserType.TextToImage,
};

var schedulerOptions = new SchedulerOptions
{
    Seed = Random.Shared.Next(),
    GuidanceScale = 5.0f,
    InferenceSteps = 15,
    Height = 512,
    Width = 512,
    SchedulerType = SchedulerType.LMS,
};

// Generate Image Example
Console.WriteLine("Generate image ...");
var outputFilename = Path.Combine(_outputDirectory, $"{schedulerOptions.Seed}_{schedulerOptions.SchedulerType}.png");
var result = await _stableDiffusionService.GenerateAsImageAsync(modelOptions, promptOptions, schedulerOptions);

The problem starts from the call to await _stableDiffusionService.GenerateAsImageAsync.

The configuration file (appsettings.json):

"OnnxStackConfig": {
  "Name": "StableDiffusion 1.5",
  "IsEnabled": true,
  "PadTokenId": 49407,
  "BlankTokenId": 49407,
  "TokenizerLimit": 77,
  "EmbeddingsLength": 768,
  "ScaleFactor": 0.18215,
  "TokenizerType": "One",
  "SampleSize": 512,
  "PipelineType": "StableDiffusion",
  "Diffusers": [
    "TextToImage",
    "ImageToImage",
    "ImageInpaintLegacy"
  ],
  "DeviceId": 0,
  "InterOpNumThreads": 0,
  "IntraOpNumThreads": 0,
  "ExecutionMode": "ORT_SEQUENTIAL",
  "ExecutionProvider": "Cuda",
  "ModelConfigurations": [
    {
      "Type": "Tokenizer",
      "OnnxModelPath": "../resources/models/cliptokenizer.onnx"
    },
    {
      "Type": "Unet",
      "OnnxModelPath": "../stable-diffusion-v1-5/unet/model.onnx"
    },
    {
      "Type": "TextEncoder",
      "OnnxModelPath": "../stable-diffusion-v1-5/text_encoder/model.onnx"
    },
    {
      "Type": "VaeEncoder",
      "OnnxModelPath": "../stable-diffusion-v1-5/vae_encoder/model.onnx"
    },
    {
      "Type": "VaeDecoder",
      "OnnxModelPath": "../stable-diffusion-v1-5/vae_decoder/model.onnx"
    }
  ]
}
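One way to isolate whether the failure is CUDA-specific is to switch the execution provider to CPU in this same configuration and re-run. If generation then succeeds, the problem lies in the CUDA provider setup (driver/cuDNN/onnxruntime version mismatch) rather than in the model or the calling code. A sketch of the changed field, assuming the same config schema as above:

```
"ExecutionProvider": "Cpu"
```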

The Dockerfile (taken from the OnnxStack git repository):

RUN curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | bash && apt-get install -y git-lfs
RUN git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 -b onnx
RUN git clone https://huggingface.co/TheyCallMeHex/LCM-Dreamshaper-V7-ONNX
RUN wget -N -t 5 -T 10 http://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb \
    && dpkg -i ./cuda-keyring_1.1-1_all.deb
RUN apt-get update \
    && apt-get install -y libcublaslt11 libcublas11 libcudnn8=8.9.1.23-1+cuda11.8 libcufft10 libcudart11.0
RUN wget http://nz2.archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1f-1ubuntu2.20_amd64.deb && dpkg -i libssl1.1_1.1.1f-1ubuntu2.20_amd64.deb

RUN apt-get update \
    && apt-get install -y dotnet-sdk-7.0 icu-devtools

ENV \
    DOTNET_RUNNING_IN_CONTAINER=true \
    DOTNET_GENERATE_ASPNET_CERTIFICATE=false \
    DOTNET_NOLOGO=true \
    NUGET_XMLDOC_MODE=skip

COPY . .
[...]

Thank you :)

GPU idle

There is no load on the GPU. This problem has also happened with Python, when the OnnxRuntime libraries were loaded incorrectly.

Support for Float16 Stable Diffusion Onnx models

I was trying to use this with the Microsoft Olive-optimized Float16 Stable Diffusion models and it was throwing an exception.
Microsoft.ML.OnnxRuntime.OnnxRuntimeException: '[ErrorCode:Fail] D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\ExecutionProvider.cpp(935)\onnxruntime.DLL!00007FFC53310120: (caller: 00007FFC5330BE42) Exception(1) tid(1e34) 80070057 The parameter is incorrect.

at line 
var results = await _onnxModelService.RunInferenceAsync(model, OnnxModelType.TextEncoder, inputs, outputs);

in function 
public async Task<float[]> EncodeTokensAsync(IModelOptions model, int[] tokenizedInput) 
in file PromptService.cs

The model I used is here
https://huggingface.co/softwareweaver/photon
It was converted to ONNX using the Microsoft Olive toolchain
https://github.com/microsoft/Olive

Thanks,
Ash

fails when loading model

[1/5/2024 8:57:40 AM] [Information] [ModelPickerControl] [LoadModel] - 'dreamlike' Loaded., Elapsed: 3ms
[1/5/2024 8:57:43 AM] [Information] [ModelPickerControl] [LoadModel] - 'dreamlike' Loading...
[1/5/2024 8:57:43 AM] [Error] [ModelPickerControl] An error occured while loading model 'dreamlike'
Microsoft.ML.OnnxRuntime.OnnxRuntimeException: [ErrorCode:RuntimeException] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\non-docker\OnnxStack_TensorRT\onnxruntime_providers_tensorrt.dll"

at Microsoft.ML.OnnxRuntime.SessionOptions.AppendExecutionProvider_Tensorrt(Int32 deviceId)
at OnnxStack.Core.Extensions.GetSessionOptions(OnnxModelConfig configuration)
at OnnxStack.Core.Model.OnnxModelSession..ctor(OnnxModelConfig configuration, PrePackedWeightsContainer container)
at OnnxStack.Core.Model.OnnxModelSet.<.ctor>b__3_1(OnnxModelConfig modelConfig)
at System.Collections.Immutable.ImmutableDictionary.<>c__DisplayClass9_0`3.<ToImmutableDictionary>b__0(TSource element)
at System.Linq.Enumerable.SelectListIterator`2.MoveNext()
at System.Collections.Immutable.ImmutableDictionary`2.AddRange(IEnumerable`1 items, MutationInput origin, KeyCollisionBehavior collisionBehavior)
at System.Collections.Immutable.ImmutableDictionary`2.AddRange(IEnumerable`1 pairs, Boolean avoidToHashMap)
at System.Collections.Immutable.ImmutableDictionary`2.AddRange(IEnumerable`1 pairs)
at System.Collections.Immutable.ImmutableDictionary.ToImmutableDictionary[TSource,TKey,TValue](IEnumerable`1 source, Func`2 keySelector, Func`2 elementSelector, IEqualityComparer`1 keyComparer, IEqualityComparer`1 valueComparer)
at System.Collections.Immutable.ImmutableDictionary.ToImmutableDictionary[TSource,TKey,TValue](IEnumerable`1 source, Func`2 keySelector, Func`2 elementSelector)
at OnnxStack.Core.Model.OnnxModelSet..ctor(IOnnxModelSetConfig configuration)
at OnnxStack.Core.Services.OnnxModelService.LoadModelSet(IOnnxModelSetConfig modelSetConfig)
at OnnxStack.Core.Services.OnnxModelService.<>c__DisplayClass5_0.b__0()
at System.Threading.Tasks.Task`1.InnerInvoke()
at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(Thread threadPoolThread, ExecutionContext executionContext, ContextCallback callback, Object state)
--- End of stack trace from previous location ---
at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(Thread threadPoolThread, ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot, Thread threadPoolThread)
--- End of stack trace from previous location ---
at OnnxStack.Core.Services.OnnxModelService.LoadModelAsync(IOnnxModelSetConfig model)
at OnnxStack.StableDiffusion.Services.StableDiffusionService.LoadModelAsync(IOnnxModelSetConfig model)
at OnnxStack.UI.UserControls.ModelPickerControl.LoadModel() in D:\Repositories\OnnxStack\OnnxStack.UI\UserControls\ModelPickerControl.xaml.cs:line 124
[1/5/2024 8:57:43 AM] [Information] [ModelPickerControl] [LoadModel] - 'dreamlike' Loaded., Elapsed: 3ms

I downloaded the Hugging Face repo linked in the README and attempted to load it, but I get this in the logs.
I will note: it is looking for a file on drive D: instead of a file in the download. I believe you may have hard-coded absolute paths instead of relative paths in your repo.
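For context on the error itself: LoadLibrary error 126 means Windows could not find a native dependency of onnxruntime_providers_tensorrt.dll (TensorRT and its matching CUDA libraries must be installed and on PATH). If TensorRT isn't actually required, a possible workaround is pointing the model set at a provider whose native libraries are present — a sketch assuming the same config schema shown in the CUDA issue above:

```
"ExecutionProvider": "Cuda"
```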

sdxl-turbo can't run

onnx model: https://huggingface.co/stabilityai/sdxl-turbo
"TokenizerType": "Both",
"ModelType": "Base",
"PipelineType": "StableDiffusionXL",

[2023/12/20 8:33:04] [Information] [ModelPickerControl] [LoadModel] - 'SDXLTurbo' Loading...
[2023/12/20 8:36:12] [Information] [ModelPickerControl] [LoadModel] - 'SDXLTurbo' Loaded., Elapsed: 188.1343sec
[2023/12/20 8:37:20] [Information] [StableDiffusionXLDiffuser] [DiffuseAsync] - Diffuse starting...
[2023/12/20 8:37:20] [Information] [StableDiffusionXLDiffuser] [DiffuseAsync] - Model: SDXLTurbo, Pipeline: StableDiffusionXL, Diffuser: TextToImage, Scheduler: EulerAncestral
[2023/12/20 8:37:22] [Error] [TextToImageView] Error during Generate
System.AggregateException: One or more errors occurred. ([ErrorCode:InvalidArgument] Got invalid dimensions for input: time_ids for the following indices
index: 1 Got: 12 Expected: 6
Please fix either the inputs or the model.)
---> Microsoft.ML.OnnxRuntime.OnnxRuntimeException: [ErrorCode:InvalidArgument] Got invalid dimensions for input: time_ids for the following indices
index: 1 Got: 12 Expected: 6
Please fix either the inputs or the model.
at Microsoft.ML.OnnxRuntime.InferenceSession.<>c__DisplayClass75_0.b__0(IReadOnlyCollection`1 outputs, IntPtr status)
--- End of stack trace from previous location ---
at Microsoft.ML.OnnxRuntime.InferenceSession.RunAsync(RunOptions options, IReadOnlyCollection`1 inputNames, IReadOnlyCollection`1 inputValues, IReadOnlyCollection`1 outputNames, IReadOnlyCollection`1 outputValues)
at OnnxStack.StableDiffusion.Diffusers.StableDiffusionXL.StableDiffusionXLDiffuser.SchedulerStepAsync(StableDiffusionModelSet modelOptions, PromptOptions promptOptions, SchedulerOptions schedulerOptions, PromptEmbeddingsResult promptEmbeddings, Boolean performGuidance, Action`2 progressCallback, CancellationToken cancellationToken)
at OnnxStack.StableDiffusion.Diffusers.DiffuserBase.DiffuseAsync(StableDiffusionModelSet modelOptions, PromptOptions promptOptions, SchedulerOptions schedulerOptions, Action`2 progressCallback, CancellationToken cancellationToken)
at OnnxStack.StableDiffusion.Services.StableDiffusionService.DiffuseAsync(StableDiffusionModelSet modelOptions, PromptOptions promptOptions, SchedulerOptions schedulerOptions, Action`2 progress, CancellationToken cancellationToken)
at OnnxStack.StableDiffusion.Services.StableDiffusionService.GenerateAsync(StableDiffusionModelSet model, PromptOptions prompt, SchedulerOptions options, Action`2 progressCallback, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)
at OnnxStack.StableDiffusion.Services.StableDiffusionService.<>c.b__15_0(Task`1 t)
at System.Threading.Tasks.ContinuationResultTaskFromResultTask`2.InnerInvoke()
at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(Thread threadPoolThread, ExecutionContext executionContext, ContextCallback callback, Object state)
--- End of stack trace from previous location ---
at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(Thread threadPoolThread, ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot, Thread threadPoolThread)
--- End of stack trace from previous location ---
at OnnxStack.StableDiffusion.Services.StableDiffusionService.GenerateAsBytesAsync(StableDiffusionModelSet model, PromptOptions prompt, SchedulerOptions options, Action`2 progressCallback, CancellationToken cancellationToken)
at OnnxStack.UI.Views.TextToImageView.ExecuteStableDiffusion(StableDiffusionModelSet modelOptions, PromptOptions promptOptions, SchedulerOptions schedulerOptions, BatchOptions batchOptions)+MoveNext() in D:\Repositories\OnnxStack\OnnxStack.UI\Views\TextToImageView.xaml.cs:line 308
at OnnxStack.UI.Views.TextToImageView.ExecuteStableDiffusion(StableDiffusionModelSet modelOptions, PromptOptions promptOptions, SchedulerOptions schedulerOptions, BatchOptions batchOptions)+System.Threading.Tasks.Sources.IValueTaskSource<System.Boolean>.GetResult()
at OnnxStack.UI.Views.TextToImageView.Generate() in D:\Repositories\OnnxStack\OnnxStack.UI\Views\TextToImageView.xaml.cs:line 193
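For context on the dimension error: SDXL UNets exported with the standard conditioning take a time_ids input of six values per batch item (original height/width, crop top/left, target height/width); getting 12 where 6 is expected suggests the guidance-doubled batch is being flattened into one row. A self-contained sketch of the expected per-item layout — the field order is an assumption based on the usual SDXL export, so verify it against the model's metadata:

```csharp
// Hypothetical sketch: the six SDXL "add time ids" for one batch item.
float height = 512f, width = 512f;
float[] timeIds =
{
    height, width,   // original size
    0f, 0f,          // crop top-left
    height, width,   // target size
};
// With classifier-free guidance the batch dimension doubles,
// but each row must stay length 6 - i.e. shape [2, 6], not [1, 12].
System.Console.WriteLine(timeIds.Length); // 6
```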

dotnet build fails on Linux citing missing build target Microsoft.NET.Sdk.WindowsDesktop.targets

Hi there, great project! It's awesome to see you building on the excellent C# ONNX Runtime API and working to make generative AI more accessible to C# developers, so we don't have to endlessly wade through piles of Python code written hastily by academic researchers!

I was keen to play around with this project and figure out how to add support for other ONNX-based models, as the format really seems to be gaining traction these days. However, I'm running Ubuntu 23.04, and after cloning the project and attempting to run dotnet build I get the following compilation error:

dotnet build

Welcome to .NET 7.0!
---------------------
SDK Version: 7.0.113

----------------
Installed an ASP.NET Core HTTPS development certificate.
To trust the certificate run 'dotnet dev-certs https --trust' (Windows and macOS only).
Learn about HTTPS: https://aka.ms/dotnet-https
----------------
Write your first app: https://aka.ms/dotnet-hello-world
Find out what's new: https://aka.ms/dotnet-whats-new
Explore documentation: https://aka.ms/dotnet-docs
Report issues and find source on GitHub: https://github.com/dotnet/core
Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli
--------------------------------------------------------------------------------------
MSBuild version 17.4.8+6918b863a for .NET
  Determining projects to restore...
  All projects are up-to-date for restore.
/usr/lib/dotnet/sdk/7.0.113/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(1226,3): error MSB4019: The imported project "/usr/lib/dotnet/sdk/7.0.113/Sdks/Microsoft.NET.Sdk.WindowsDesktop/targets/Microsoft.NET.Sdk.WindowsDesktop.targets" was not found. Confirm that the expression in the Import declaration ";/usr/lib/dotnet/sdk/7.0.113/Sdks/Microsoft.NET.Sdk/targets/../../Microsoft.NET.Sdk.WindowsDesktop/targets/Microsoft.NET.Sdk.WindowsDesktop.targets" is correct, and that the file exists on disk. [/home/yolo/source/OnnxStack/OnnxStack.UI/OnnxStack.UI.csproj]
  OnnxStack.Core -> /home/yolo/source/OnnxStack/OnnxStack.Core/bin/Debug/net7.0/OnnxStack.Core.dll
  OnnxStack.StableDiffusion -> /home/yolo/source/OnnxStack/OnnxStack.StableDiffusion/bin/Debug/net7.0/OnnxStack.StableDiffusion.dll
  OnnxStack.Console -> /home/yolo/source/OnnxStack/OnnxStack.Console/bin/Debug/net7.0/OnnxStack.Console.dll
  OnnxStack.WebUI -> /home/yolo/source/OnnxStack/OnnxStack.WebUI/bin/Debug/net7.0/OnnxStack.WebUI.dll

Build FAILED.

/usr/lib/dotnet/sdk/7.0.113/Sdks/Microsoft.NET.Sdk/targets/Microsoft.NET.Sdk.targets(1226,3): error MSB4019: The imported project "/usr/lib/dotnet/sdk/7.0.113/Sdks/Microsoft.NET.Sdk.WindowsDesktop/targets/Microsoft.NET.Sdk.WindowsDesktop.targets" was not found. Confirm that the expression in the Import declaration ";/usr/lib/dotnet/sdk/7.0.113/Sdks/Microsoft.NET.Sdk/targets/../../Microsoft.NET.Sdk.WindowsDesktop/targets/Microsoft.NET.Sdk.WindowsDesktop.targets" is correct, and that the file exists on disk. [/home/yolo/source/OnnxStack/OnnxStack.UI/OnnxStack.UI.csproj]
    0 Warning(s)
    1 Error(s)

 Time Elapsed 00:00:02.90

I guess it makes sense that the WindowsDesktop target is not available on a Linux distribution, but it's a shame that it prevents the rest of the solution from compiling.

Please provide instructions on how to run it.

r@mini OnnxStack % ls
Assets                          OnnxStack.Console               OnnxStack.StableDiffusion       README.md
LICENSE                         OnnxStack.Core                  OnnxStack.sln
r@mini OnnxStack % dotnet run --project OnnxStack.Console/OnnxStack.Console.csproj
Unhandled exception. System.IO.FileLoadException: Could not load file or assembly 'OnnxStack.Console, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'.
r@mini OnnxStack % dotnet --version
7.0.308
r@mini OnnxStack % 

dotnet build was successful.

Run fail on Unit Test (StableDiffusionTests.cs)

Hi,

There are some errors in StableDiffusionTests.cs.

When running the unit test code, it fails. v0.10.0 builds successfully, but v0.12.0 and later do not.

Commit: main branch (3c12247)
Workaround: #90

Error message:

  OnnxStack.StableDiffusion -> C:\Users\kimi0\Desktop\ONNX\OnnxStack\OnnxStack.StableDiffusion\bin\Debug\net7.0\OnnxStack.StableDiffusion.dll
C:\Users\kimi0\Desktop\ONNX\OnnxStack\OnnxStack.IntegrationTests\StableDiffusionTests.cs(43,38): error CS1061: 'IStableDiffusionService' does not contain a definition for 'Models' and no accessible extension method 'Mode
ls' accepting a first argument of type 'IStableDiffusionService' could be found (are you missing a using directive or an assembly reference?) [C:\Users\kimi0\Desktop\ONNX\OnnxStack\OnnxStack.IntegrationTests\OnnxStack.In 
tegrationTests.csproj]
C:\Users\kimi0\Desktop\ONNX\OnnxStack\OnnxStack.IntegrationTests\StableDiffusionTests.cs(61,38): error CS1061: 'IStableDiffusionService' does not contain a definition for 'Models' and no accessible extension method 'Mode 
ls' accepting a first argument of type 'IStableDiffusionService' could be found (are you missing a using directive or an assembly reference?) [C:\Users\kimi0\Desktop\ONNX\OnnxStack\OnnxStack.IntegrationTests\OnnxStack.In 
tegrationTests.csproj]

Build FAILED.

C:\Users\kimi0\Desktop\ONNX\OnnxStack\OnnxStack.IntegrationTests\StableDiffusionTests.cs(43,38): error CS1061: 'IStableDiffusionService' does not contain a definition for 'Models' and no accessible extension method 'Mode 
ls' accepting a first argument of type 'IStableDiffusionService' could be found (are you missing a using directive or an assembly reference?) [C:\Users\kimi0\Desktop\ONNX\OnnxStack\OnnxStack.IntegrationTests\OnnxStack.In 
tegrationTests.csproj]
C:\Users\kimi0\Desktop\ONNX\OnnxStack\OnnxStack.IntegrationTests\StableDiffusionTests.cs(61,38): error CS1061: 'IStableDiffusionService' does not contain a definition for 'Models' and no accessible extension method 'Mode 
ls' accepting a first argument of type 'IStableDiffusionService' could be found (are you missing a using directive or an assembly reference?) [C:\Users\kimi0\Desktop\ONNX\OnnxStack\OnnxStack.IntegrationTests\OnnxStack.In 
tegrationTests.csproj]
    0 Warning(s)
    2 Error(s)

By the way, for v0.12.0, rewriting StableDiffusionTests.cs as follows makes it work:

diff --git a/OnnxStack.IntegrationTests/StableDiffusionTests.cs b/OnnxStack.IntegrationTests/StableDiffusionTests.cs
index 73d0d9f..d6b3739 100644
--- a/OnnxStack.IntegrationTests/StableDiffusionTests.cs
+++ b/OnnxStack.IntegrationTests/StableDiffusionTests.cs
@@ -40,7 +40,7 @@ public class StableDiffusionTests
     public async Task GivenAStableDiffusionModel_WhenLoadModel_ThenModelIsLoaded(string modelName)
     {
         //arrange
-        var model = _stableDiffusion.Models.Single(m => m.Name == modelName);
+        var model = _stableDiffusion.ModelSets.Single(m => m.Name == modelName);
 
         //act
         _logger.LogInformation("Attempting to load model {0}", model.Name);
@@ -58,7 +58,7 @@ public class StableDiffusionTests
 
     {
         //arrange
-        var model = _stableDiffusion.Models.Single(m => m.Name == modelName);
+        var model = _stableDiffusion.ModelSets.Single(m => m.Name == modelName);
         _logger.LogInformation("Attempting to load model: {0}", model.Name);
         await _stableDiffusion.LoadModelAsync(model);

Thanks!

Feature Suggestion: Support for Custom Scheduler Implementation

Hi,
Firstly, nice project! It's great being able to do AI operations from a C# context.

For a project I'm working on I'd like to have low-level control over how the image is generated. From what I understand about how StableDiffusion works, writing my own Scheduler is probably the right approach.
From reading the source code, I see that the Scheduler classes are fairly well abstracted and have a shared IScheduler interface.
I'd like to write my own IScheduler implementation and have the system use it.

However, it doesn't look like there is a way to insert a custom scheduler into the system: the scheduler implementation is chosen based on the SchedulerType enum, with a set of hard-coded values.
To work around this I am currently forking the repo and modifying the source code directly, but it would be better if I didn't need to fork and maintain my own version of the code.

Could you add a way to inject custom Scheduler implementations?
If you like I could make a pull request for this, but I wanted to check whether you are open to the idea first.
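One possible shape for such an extension point is a factory registry keyed by name, with the built-in SchedulerType values pre-registered as defaults. A self-contained sketch of the idea — IScheduler here is a stand-in for the library's real interface, and the registry API is purely hypothetical:

```csharp
using System;
using System.Collections.Generic;

// Stand-in for OnnxStack's IScheduler; the real interface has more members.
public interface IScheduler { string Name { get; } }

public sealed class MyCustomScheduler : IScheduler
{
    public string Name => "MyCustomScheduler";
}

// Hypothetical registry: built-in schedulers register at startup,
// and consumers can add their own factories under new keys.
public static class SchedulerRegistry
{
    private static readonly Dictionary<string, Func<IScheduler>> _factories = new();

    public static void Register(string key, Func<IScheduler> factory) =>
        _factories[key] = factory;

    public static IScheduler Create(string key) =>
        _factories.TryGetValue(key, out var factory)
            ? factory()
            : throw new KeyNotFoundException($"No scheduler registered for '{key}'");
}
```

A diffuser would then resolve a scheduler by key instead of switching on the enum, so user code could call SchedulerRegistry.Register("Mine", () => new MyCustomScheduler()) without forking the repo.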
