
OpenAI-DotNet


A simple C# .NET client library for OpenAI to use through their RESTful API. Independently developed; this is not an official library and I am not affiliated with OpenAI. An OpenAI API account is required.

Forked from OpenAI-API-dotnet. More context on Roger Pincombe's blog.

Requirements

  • This library targets .NET 6.0 and above.
  • It should work across console apps, WinForms, WPF, ASP.NET, etc.
  • It should also work across Windows, Linux, and macOS.

Getting started

Install from NuGet

Install the OpenAI-DotNet package from NuGet. Here's how via the command line:

powershell:

Install-Package OpenAI-DotNet

dotnet:

dotnet add package OpenAI-DotNet

Looking to use OpenAI-DotNet in the Unity Game Engine? Check out our Unity package on OpenUPM.

Check out our new API docs!

https://rageagainstthepixel.github.io/OpenAI-DotNet 🆕

Authentication

There are 3 ways to provide your API keys, in order of precedence:

Warning

We recommend loading the API key from environment variables instead of hard coding it in your source. Passing keys directly is not recommended in production; use it only for accepting user credentials, local testing, and quick-start scenarios.

  1. Pass keys directly with constructor ⚠️
  2. Load key from configuration file
  3. Use System Environment Variables

Pass keys directly with constructor

Warning

We recommend loading the API key from environment variables instead of hard coding it in your source. Passing keys directly is not recommended in production; use it only for accepting user credentials, local testing, and quick-start scenarios.

using var api = new OpenAIClient("sk-apiKey");

Or create an OpenAIAuthentication object manually:

using var api = new OpenAIClient(new OpenAIAuthentication("sk-apiKey", "org-yourOrganizationId", "proj_yourProjectId"));

Load key from configuration file

Attempts to load the API key from a configuration file (by default, .openai in the current directory), optionally traversing up the directory tree or checking the user's home directory.

To create a configuration file, create a new text file named .openai containing one of the following formats:

Note

Organization and project id entries are optional.

Json format
{
  "apiKey": "sk-aaaabbbbbccccddddd",
  "organizationId": "org-yourOrganizationId",
  "projectId": "proj_yourProjectId"
}
Deprecated format
OPENAI_API_KEY=sk-aaaabbbbbccccddddd
OPENAI_ORGANIZATION_ID=org-yourOrganizationId
OPENAI_PROJECT_ID=proj_yourProjectId
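
For example, on macOS or Linux you could create the file from a shell using the JSON format above (the key and ids below are placeholders):

```shell
# Write a .openai config file (JSON format) into the current directory.
cat > .openai <<'EOF'
{
  "apiKey": "sk-aaaabbbbbccccddddd",
  "organizationId": "org-yourOrganizationId",
  "projectId": "proj_yourProjectId"
}
EOF

# Show the file we just wrote.
cat .openai
```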

You can also load the configuration file directly from a known path by calling static methods on OpenAIAuthentication:

  • Loads the default .openai config in the specified directory:
using var api = new OpenAIClient(OpenAIAuthentication.LoadFromDirectory("path/to/your/directory"));
  • Loads the configuration file from a specific path. The file does not need to be named .openai as long as it conforms to the JSON format:
using var api = new OpenAIClient(OpenAIAuthentication.LoadFromPath("path/to/your/file.json"));

Use System Environment Variables

Use your system's environment variables to specify an API key and organization:

  • Use OPENAI_API_KEY for your API key.
  • Use OPENAI_ORGANIZATION_ID to specify an organization.
  • Use OPENAI_PROJECT_ID to specify a project.
using var api = new OpenAIClient(OpenAIAuthentication.LoadFromEnv());
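
The variables themselves can be set in your shell before launching the app; for example, on macOS or Linux (placeholder values):

```shell
# Export the variables for the current shell session and its child processes.
export OPENAI_API_KEY="sk-aaaabbbbbccccddddd"
export OPENAI_ORGANIZATION_ID="org-yourOrganizationId"
export OPENAI_PROJECT_ID="proj_yourProjectId"

# Confirm the key is visible.
echo "$OPENAI_API_KEY"
```

On Windows you can set the same variables via the System Properties dialog or `setx`.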

Handling OpenAIClient and HttpClient Lifecycle

OpenAIClient implements IDisposable to manage the lifecycle of the resources it uses, including HttpClient. When you initialize OpenAIClient, it creates an internal HttpClient instance if one is not provided. This internal HttpClient is disposed of along with the OpenAIClient. If you provide an external HttpClient instance to OpenAIClient, you are responsible for managing its disposal.

  • If OpenAIClient creates its own HttpClient, it will also take care of disposing it when you dispose OpenAIClient.
  • If an external HttpClient is passed to OpenAIClient, it will not be disposed of by OpenAIClient. You must manage the disposal of the HttpClient yourself.

Please ensure you dispose of OpenAIClient appropriately to release resources in a timely manner and prevent memory or resource leaks in your application.

Typical usage with an internal HttpClient:

using var api = new OpenAIClient();

Custom HttpClient (which you must dispose of yourself):

using var customHttpClient = new HttpClient();
// set custom http client properties here
var api = new OpenAIClient(client: customHttpClient);

You can also use Microsoft's Azure OpenAI deployments.

You can find the required information in the Azure Playground by clicking the View Code button, which displays a URL like this:

https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/chat/completions?api-version={api-version}
  • your-resource-name The name of your Azure OpenAI Resource.
  • deployment-id The deployment name you chose when you deployed the model.
  • api-version The API version to use for this operation. This follows the YYYY-MM-DD format.

To set up the client to use your deployment, pass an OpenAIClientSettings instance into the client constructor.

var auth = new OpenAIAuthentication("sk-apiKey");
var settings = new OpenAIClientSettings(resourceName: "your-resource-name", deploymentId: "deployment-id", apiVersion: "api-version");
using var api = new OpenAIClient(auth, settings);

Authenticate with MSAL as usual and get an access token, then use the access token when creating your OpenAIAuthentication. Be sure to set useActiveDirectoryAuthentication to true when creating your OpenAIClientSettings.

Tutorial: Desktop app that calls web APIs: Acquire a token

// get your access token using any of the MSAL methods
var accessToken = result.AccessToken;
var auth = new OpenAIAuthentication(accessToken);
var settings = new OpenAIClientSettings(resourceName: "your-resource", deploymentId: "deployment-id", apiVersion: "api-version", useActiveDirectoryAuthentication: true);
using var api = new OpenAIClient(auth, settings);

OpenAI-DotNet-Proxy

Using either the OpenAI-DotNet or com.openai.unity packages directly in your front-end app may expose your API keys and other sensitive information. To mitigate this risk, it is recommended to set up an intermediate API that makes requests to OpenAI on behalf of your front-end app. This library can be utilized for both front-end and intermediary host configurations, ensuring secure communication with the OpenAI API.

Front End Example

In the front end example, you will need to securely authenticate your users using your preferred OAuth provider. Once the user is authenticated, exchange your custom auth token for your API key on the backend.

Follow these steps:

  1. Setup a new project using either the OpenAI-DotNet or com.openai.unity packages.
  2. Authenticate users with your OAuth provider.
  3. After successful authentication, create a new OpenAIAuthentication object and pass in the custom token with the prefix sess-.
  4. Create a new OpenAIClientSettings object and specify the domain where your intermediate API is located.
  5. Pass your new auth and settings objects to the OpenAIClient constructor when you create the client instance.

Here's an example of how to set up the front end:

var authToken = await LoginAsync();
var auth = new OpenAIAuthentication($"sess-{authToken}");
var settings = new OpenAIClientSettings(domain: "api.your-custom-domain.com");
using var api = new OpenAIClient(auth, settings);

This setup allows your front end application to securely communicate with your backend that will be using the OpenAI-DotNet-Proxy, which then forwards requests to the OpenAI API. This ensures that your OpenAI API keys and other sensitive information remain secure throughout the process.

Back End Example

In this example, we demonstrate how to set up and use OpenAIProxy in a new ASP.NET Core web app. The proxy server will handle authentication and forward requests to the OpenAI API, ensuring that your API keys and other sensitive information remain secure.

  1. Create a new ASP.NET Core minimal web API project.
  2. Add the OpenAI-DotNet-Proxy NuGet package to your project.
    • PowerShell install: Install-Package OpenAI-DotNet-Proxy
    • dotnet install: dotnet add package OpenAI-DotNet-Proxy
    • Manually editing .csproj: <PackageReference Include="OpenAI-DotNet-Proxy" />
  3. Create a new class that inherits from AbstractAuthenticationFilter and override the ValidateAuthenticationAsync method. This implements the IAuthenticationFilter interface used to check the user's session token against your internal server.
  4. In Program.cs, create a new proxy web application by calling OpenAIProxy.CreateWebApplication method, passing your custom AuthenticationFilter as a type argument.
  5. Create OpenAIAuthentication and OpenAIClientSettings as you would normally with your API keys, org id, or Azure settings.
public partial class Program
{
    private class AuthenticationFilter : AbstractAuthenticationFilter
    {
        public override async Task ValidateAuthenticationAsync(IHeaderDictionary request)
        {
            await Task.CompletedTask; // remote resource call to verify token

            // You will need to implement your own class to properly test
            // custom issued tokens you've setup for your end users.
            if (!request.Authorization.ToString().Contains(TestUserToken))
            {
                throw new AuthenticationException("User is not authorized");
            }
        }
    }

    public static void Main(string[] args)
    {
        var auth = OpenAIAuthentication.LoadFromEnv();
        var settings = new OpenAIClientSettings(/* your custom settings if using Azure OpenAI */);
        using var openAIClient = new OpenAIClient(auth, settings);
        OpenAIProxy.CreateWebApplication<AuthenticationFilter>(args, openAIClient).Run();
    }
}

Once you have set up your proxy server, your end users can now make authenticated requests to your proxy api instead of directly to the OpenAI API. The proxy server will handle authentication and forward requests to the OpenAI API, ensuring that your API keys and other sensitive information remain secure.

Models

List and describe the various models available in the API. You can refer to the Models documentation to understand what models are available and the differences between them.

Also checkout model endpoint compatibility to understand which models work with which endpoints.

To specify a custom model not pre-defined in this library:

var model = new Model("model-id");

The Models API is accessed via OpenAIClient.ModelsEndpoint

Lists the currently available models, and provides basic information about each one such as the owner and availability.

using var api = new OpenAIClient();
var models = await api.ModelsEndpoint.GetModelsAsync();

foreach (var model in models)
{
    Console.WriteLine(model.ToString());
}

Retrieves a model instance, providing basic information about the model such as the owner and permissions.

using var api = new OpenAIClient();
var model = await api.ModelsEndpoint.GetModelDetailsAsync("gpt-4o");
Console.WriteLine(model.ToString());

Delete a fine-tuned model. You must have the Owner role in your organization.

using var api = new OpenAIClient();
var isDeleted = await api.ModelsEndpoint.DeleteFineTuneModelAsync("your-fine-tuned-model");
Assert.IsTrue(isDeleted);

Assistants

Warning

Beta Feature. API subject to breaking changes.

Build assistants that can call models and use tools to perform tasks.

The Assistants API is accessed via OpenAIClient.AssistantsEndpoint

Returns a list of assistants.

using var api = new OpenAIClient();
var assistantsList = await api.AssistantsEndpoint.ListAssistantsAsync();

foreach (var assistant in assistantsList.Items)
{
    Console.WriteLine($"{assistant} -> {assistant.CreatedAt}");
}

Create an assistant with a model and instructions.

using var api = new OpenAIClient();
var request = new CreateAssistantRequest(Model.GPT4o);
var assistant = await api.AssistantsEndpoint.CreateAssistantAsync(request);

Retrieves an assistant.

using var api = new OpenAIClient();
var assistant = await api.AssistantsEndpoint.RetrieveAssistantAsync("assistant-id");
Console.WriteLine($"{assistant} -> {assistant.CreatedAt}");

Modifies an assistant.

using var api = new OpenAIClient();
var createRequest = new CreateAssistantRequest(Model.GPT4_Turbo);
var assistant = await api.AssistantsEndpoint.CreateAssistantAsync(createRequest);
var modifyRequest = new CreateAssistantRequest(Model.GPT4o);
var modifiedAssistant = await api.AssistantsEndpoint.ModifyAssistantAsync(assistant.Id, modifyRequest);
// OR AssistantExtension for easier use!
var modifiedAssistantEx = await assistant.ModifyAsync(modifyRequest);

Delete an assistant.

using var api = new OpenAIClient();
var isDeleted = await api.AssistantsEndpoint.DeleteAssistantAsync("assistant-id");
// OR AssistantExtension for easier use!
var isDeleted = await assistant.DeleteAsync();
Assert.IsTrue(isDeleted);

Note

Assistant stream events can easily be added to existing thread calls by passing a Func<IServerSentEvent, Task> streamEventHandler callback to any existing method that supports streaming.

Threads

Create Threads that Assistants can interact with.

The Threads API is accessed via OpenAIClient.ThreadsEndpoint

Create a thread.

using var api = new OpenAIClient();
var thread = await api.ThreadsEndpoint.CreateThreadAsync();
Console.WriteLine($"Create thread {thread.Id} -> {thread.CreatedAt}");

Create a thread and run it in one request.

See also: Thread Runs

using var api = new OpenAIClient();
var assistant = await api.AssistantsEndpoint.CreateAssistantAsync(
    new CreateAssistantRequest(
        name: "Math Tutor",
        instructions: "You are a personal math tutor. Answer questions briefly, in a sentence or less.",
        model: Model.GPT4o));
var messages = new List<Message> { "I need to solve the equation `3x + 11 = 14`. Can you help me?" };
var threadRequest = new CreateThreadRequest(messages);
var run = await assistant.CreateThreadAndRunAsync(threadRequest);
Console.WriteLine($"Created thread and run: {run.ThreadId} -> {run.Id} -> {run.CreatedAt}");

Create Thread and Run Streaming

Create a thread and run it in one request while streaming events.

using var api = new OpenAIClient();
var tools = new List<Tool>
{
    Tool.GetOrCreateTool(typeof(WeatherService), nameof(WeatherService.GetCurrentWeatherAsync))
};
var assistantRequest = new CreateAssistantRequest(tools: tools, instructions: "You are a helpful weather assistant. Use the appropriate unit based on geographical location.");
var assistant = await api.AssistantsEndpoint.CreateAssistantAsync(assistantRequest);
ThreadResponse thread = null;
async Task StreamEventHandler(IServerSentEvent streamEvent)
{
    switch (streamEvent)
    {
        case ThreadResponse threadResponse:
            thread = threadResponse;
            break;
        case RunResponse runResponse:
            if (runResponse.Status == RunStatus.RequiresAction)
            {
                var toolOutputs = await assistant.GetToolOutputsAsync(runResponse);

                foreach (var toolOutput in toolOutputs)
                {
                    Console.WriteLine($"Tool Output: {toolOutput}");
                }

                await runResponse.SubmitToolOutputsAsync(toolOutputs, StreamEventHandler);
            }
            break;
        default:
            Console.WriteLine(streamEvent.ToJsonString());
            break;
    }
}

var run = await assistant.CreateThreadAndRunAsync("I'm in Kuala-Lumpur, please tell me what's the temperature now?", StreamEventHandler);
run = await run.WaitForStatusChangeAsync();
var messages = await thread.ListMessagesAsync();
foreach (var response in messages.Items.Reverse())
{
    Console.WriteLine($"{response.Role}: {response.PrintContent()}");
}

Retrieves a thread.

using var api = new OpenAIClient();
var thread = await api.ThreadsEndpoint.RetrieveThreadAsync("thread-id");
// OR if you simply wish to get the latest state of a thread
thread = await thread.UpdateAsync();
Console.WriteLine($"Retrieve thread {thread.Id} -> {thread.CreatedAt}");

Modifies a thread.

Note: Only the metadata can be modified.

using var api = new OpenAIClient();
var thread = await api.ThreadsEndpoint.CreateThreadAsync();
var metadata = new Dictionary<string, string>
{
    { "key", "custom thread metadata" }
};
thread = await api.ThreadsEndpoint.ModifyThreadAsync(thread.Id, metadata);
// OR use extension method for convenience!
thread = await thread.ModifyAsync(metadata);
Console.WriteLine($"Modify thread {thread.Id} -> {thread.Metadata["key"]}");

Delete a thread.

using var api = new OpenAIClient();
var isDeleted = await api.ThreadsEndpoint.DeleteThreadAsync("thread-id");
// OR use extension method for convenience!
var isDeleted = await thread.DeleteAsync();
Assert.IsTrue(isDeleted);

Messages

Create messages within threads.

Returns a list of messages for a given thread.

using var api = new OpenAIClient();
var messageList = await api.ThreadsEndpoint.ListMessagesAsync("thread-id");
// OR use extension method for convenience!
var messageList = await thread.ListMessagesAsync();

foreach (var message in messageList.Items)
{
    Console.WriteLine($"{message.Id}: {message.Role}: {message.PrintContent()}");
}

Create a message.

using var api = new OpenAIClient();
var thread = await api.ThreadsEndpoint.CreateThreadAsync();
var request = new CreateMessageRequest("Hello world!");
var message = await api.ThreadsEndpoint.CreateMessageAsync(thread.Id, request);
// OR use extension method for convenience!
var message = await thread.CreateMessageAsync("Hello World!");
Console.WriteLine($"{message.Id}: {message.Role}: {message.PrintContent()}");

Retrieve a message.

using var api = new OpenAIClient();
var message = await api.ThreadsEndpoint.RetrieveMessageAsync("thread-id", "message-id");
// OR use extension methods for convenience!
var message = await thread.RetrieveMessageAsync("message-id");
var message = await message.UpdateAsync();
Console.WriteLine($"{message.Id}: {message.Role}: {message.PrintContent()}");

Modify a message.

Note: Only the message metadata can be modified.

using var api = new OpenAIClient();
var metadata = new Dictionary<string, string>
{
    { "key", "custom message metadata" }
};
var message = await api.ThreadsEndpoint.ModifyMessageAsync("thread-id", "message-id", metadata);
// OR use extension method for convenience!
var message = await message.ModifyAsync(metadata);
Console.WriteLine($"Modify message metadata: {message.Id} -> {message.Metadata["key"]}");

Runs

Represents an execution run on a thread.

Returns a list of runs belonging to a thread.

using var api = new OpenAIClient();
var runList = await api.ThreadsEndpoint.ListRunsAsync("thread-id");
// OR use extension method for convenience!
var runList = await thread.ListRunsAsync();

foreach (var run in runList.Items)
{
    Console.WriteLine($"[{run.Id}] {run.Status} | {run.CreatedAt}");
}

Create a run.

using var api = new OpenAIClient();
var assistant = await api.AssistantsEndpoint.CreateAssistantAsync(
    new CreateAssistantRequest(
        name: "Math Tutor",
        instructions: "You are a personal math tutor. Answer questions briefly, in a sentence or less.",
        model: Model.GPT4o));
var thread = await api.ThreadsEndpoint.CreateThreadAsync();
var message = await thread.CreateMessageAsync("I need to solve the equation `3x + 11 = 14`. Can you help me?");
var run = await thread.CreateRunAsync(assistant);
Console.WriteLine($"[{run.Id}] {run.Status} | {run.CreatedAt}");

Create Thread Run Streaming

Create a run and stream the events.

using var api = new OpenAIClient();
var assistant = await api.AssistantsEndpoint.CreateAssistantAsync(
    new CreateAssistantRequest(
        name: "Math Tutor",
        instructions: "You are a personal math tutor. Answer questions briefly, in a sentence or less. Your responses should be formatted in JSON.",
        model: Model.GPT4o,
        responseFormat: ChatResponseFormat.Json));
var thread = await api.ThreadsEndpoint.CreateThreadAsync();
var message = await thread.CreateMessageAsync("I need to solve the equation `3x + 11 = 14`. Can you help me?");
var run = await thread.CreateRunAsync(assistant, async streamEvent =>
{
    Console.WriteLine(streamEvent.ToJsonString());
    await Task.CompletedTask;
});
var messages = await thread.ListMessagesAsync();

foreach (var response in messages.Items.Reverse())
{
    Console.WriteLine($"{response.Role}: {response.PrintContent()}");
}

Retrieves a run.

using var api = new OpenAIClient();
var run = await api.ThreadsEndpoint.RetrieveRunAsync("thread-id", "run-id");
// OR use extension method for convenience!
var run = await thread.RetrieveRunAsync("run-id");
var run = await run.UpdateAsync();
Console.WriteLine($"[{run.Id}] {run.Status} | {run.CreatedAt}");

Modifies a run.

Note: Only the metadata can be modified.

using var api = new OpenAIClient();
var metadata = new Dictionary<string, string>
{
    { "key", "custom run metadata" }
};
var run = await api.ThreadsEndpoint.ModifyRunAsync("thread-id", "run-id", metadata);
// OR use extension method for convenience!
var run = await run.ModifyAsync(metadata);
Console.WriteLine($"Modify run {run.Id} -> {run.Metadata["key"]}");

When a run has the status: requires_action and required_action.type is submit_tool_outputs, this endpoint can be used to submit the outputs from the tool calls once they're all completed. All outputs must be submitted in a single request.

Note

See Create Thread and Run Streaming example on how to stream tool output events.

using var api = new OpenAIClient();
var tools = new List<Tool>
{
    // Use a predefined tool
    Tool.Retrieval, Tool.CodeInterpreter,
    // Or create a tool from a type and the name of the method you want to use for function calling
    Tool.GetOrCreateTool(typeof(WeatherService), nameof(WeatherService.GetCurrentWeatherAsync)),
    // Pass in an instance of an object to call a method on it
    Tool.GetOrCreateTool(api.ImagesEndPoint, nameof(ImagesEndpoint.GenerateImageAsync)),
    // Define func<,> callbacks
    Tool.FromFunc("name_of_func", () => { /* callback function */ }),
    Tool.FromFunc<T1,T2,TResult>("func_with_multiple_params", (t1, t2) => { /* logic that calculates return value */ return tResult; })
};
var assistantRequest = new CreateAssistantRequest(tools: tools, instructions: "You are a helpful weather assistant. Use the appropriate unit based on geographical location.");
var testAssistant = await api.AssistantsEndpoint.CreateAssistantAsync(assistantRequest);
var run = await testAssistant.CreateThreadAndRunAsync("I'm in Kuala-Lumpur, please tell me what's the temperature now?");
// waiting while run is Queued and InProgress
run = await run.WaitForStatusChangeAsync();

// Invoke all of the tool call functions and return the tool outputs.
var toolOutputs = await testAssistant.GetToolOutputsAsync(run.RequiredAction.SubmitToolOutputs.ToolCalls);

foreach (var toolOutput in toolOutputs)
{
    Console.WriteLine($"tool call output: {toolOutput.Output}");
}
// submit the tool outputs
run = await run.SubmitToolOutputsAsync(toolOutputs);
// waiting while run in Queued and InProgress
run = await run.WaitForStatusChangeAsync();
var messages = await run.ListMessagesAsync();

foreach (var message in messages.Items.OrderBy(response => response.CreatedAt))
{
    Console.WriteLine($"{message.Role}: {message.PrintContent()}");
}

Structured Outputs

Structured Outputs is the evolution of JSON mode. While both ensure valid JSON is produced, only Structured Outputs ensures schema adherence.

Important

  • When using JSON mode, always instruct the model to produce JSON via some message in the conversation, for example via your system message. If you don't include an explicit instruction to generate JSON, the model may generate an unending stream of whitespace and the request may run continually until it reaches the token limit. To help ensure you don't forget, the API will throw an error if the string "JSON" does not appear somewhere in the context.
  • The JSON in the message the model returns may be partial (i.e. cut off) if finish_reason is length, which indicates the generation exceeded max_tokens or the conversation exceeded the token limit. To guard against this, check finish_reason before parsing the response.
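
For example, with this library's chat API a guard might look like the following (a minimal, untested sketch; the messages list and the MyResult type are assumed, and it presumes the first choice exposes FinishReason as a string):

```csharp
using var api = new OpenAIClient();
// Request JSON mode; remember the conversation must also instruct the model to produce JSON.
var chatRequest = new ChatRequest(messages, responseFormat: ChatResponseFormat.Json);
var response = await api.ChatEndpoint.GetCompletionAsync(chatRequest);
var choice = response.FirstChoice;

// Guard: if the generation hit the token limit, the JSON is likely truncated.
if (choice.FinishReason == "length")
{
    throw new InvalidOperationException("Completion was truncated; JSON may be incomplete.");
}

// Safe to parse once we know the model finished normally.
// MyResult is a hypothetical type you would define; JsonSerializer is System.Text.Json.
var result = JsonSerializer.Deserialize<MyResult>(choice.Message.ToString());
```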

First, define the structure of your responses; these will be used as your schema. These are the objects you'll deserialize to, so be sure to use standard JSON object models.

public class MathResponse
{
    [JsonInclude]
    [JsonPropertyName("steps")]
    public IReadOnlyList<MathStep> Steps { get; private set; }

    [JsonInclude]
    [JsonPropertyName("final_answer")]
    public string FinalAnswer { get; private set; }
}

public class MathStep
{
    [JsonInclude]
    [JsonPropertyName("explanation")]
    public string Explanation { get; private set; }

    [JsonInclude]
    [JsonPropertyName("output")]
    public string Output { get; private set; }
}

To use, simply specify the MathResponse type as a generic type argument to either CreateAssistantAsync, CreateRunAsync, or CreateThreadAndRunAsync.

using var api = new OpenAIClient();
var assistant = await api.AssistantsEndpoint.CreateAssistantAsync<MathResponse>(
    new CreateAssistantRequest(
        name: "Math Tutor",
        instructions: "You are a helpful math tutor. Guide the user through the solution step by step.",
        model: "gpt-4o-2024-08-06"));
ThreadResponse thread = null;

try
{
    async Task StreamEventHandler(IServerSentEvent @event)
    {
        try
        {
            switch (@event)
            {
                case MessageResponse message:
                    if (message.Status != MessageStatus.Completed)
                    {
                        Console.WriteLine(@event.ToJsonString());
                        break;
                    }

                    var mathResponse = message.FromSchema<MathResponse>();

                    for (var i = 0; i < mathResponse.Steps.Count; i++)
                    {
                        var step = mathResponse.Steps[i];
                        Console.WriteLine($"Step {i}: {step.Explanation}");
                        Console.WriteLine($"Result: {step.Output}");
                    }

                    Console.WriteLine($"Final Answer: {mathResponse.FinalAnswer}");
                    break;
                default:
                    Console.WriteLine(@event.ToJsonString());
                    break;
            }
        }
        catch (Exception e)
        {
            Console.WriteLine(e);
            throw;
        }

        await Task.CompletedTask;
    }

    var run = await assistant.CreateThreadAndRunAsync("how can I solve 8x + 7 = -23", StreamEventHandler);
    thread = await run.GetThreadAsync();
    run = await run.WaitForStatusChangeAsync();
    Console.WriteLine($"Created thread and run: {run.ThreadId} -> {run.Id} -> {run.CreatedAt}");
    var messages = await thread.ListMessagesAsync();

    foreach (var response in messages.Items.OrderBy(response => response.CreatedAt))
    {
        Console.WriteLine($"{response.Role}: {response.PrintContent()}");
    }
}
finally
{
    await assistant.DeleteAsync(deleteToolResources: thread == null);

    if (thread != null)
    {
        var isDeleted = await thread.DeleteAsync(deleteToolResources: true);
    }
}

Run Steps

Returns a list of run steps belonging to a run.

using var api = new OpenAIClient();
var runStepList = await api.ThreadsEndpoint.ListRunStepsAsync("thread-id", "run-id");
// OR use extension method for convenience!
var runStepList = await run.ListRunStepsAsync();

foreach (var runStep in runStepList.Items)
{
    Console.WriteLine($"[{runStep.Id}] {runStep.Status} {runStep.CreatedAt} -> {runStep.ExpiresAt}");
}

Retrieves a run step.

using var api = new OpenAIClient();
var runStep = await api.ThreadsEndpoint.RetrieveRunStepAsync("thread-id", "run-id", "step-id");
// OR use extension method for convenience!
var runStep = await run.RetrieveRunStepAsync("step-id");
var runStep = await runStep.UpdateAsync();
Console.WriteLine($"[{runStep.Id}] {runStep.Status} {runStep.CreatedAt} -> {runStep.ExpiresAt}");

Cancels a run that is in_progress.

using var api = new OpenAIClient();
var isCancelled = await api.ThreadsEndpoint.CancelRunAsync("thread-id", "run-id");
// OR use extension method for convenience!
var isCancelled = await run.CancelAsync();
Assert.IsTrue(isCancelled);

Vector Stores

Vector stores are used to store files for use by the file_search tool.

The Vector Stores API is accessed via OpenAIClient.VectorStoresEndpoint

Returns a list of vector stores.

using var api = new OpenAIClient();
var vectorStores = await api.VectorStoresEndpoint.ListVectorStoresAsync();

foreach (var vectorStore in vectorStores.Items)
{
    Console.WriteLine(vectorStore);
}

Create a vector store.

using var api = new OpenAIClient();
var createVectorStoreRequest = new CreateVectorStoreRequest("test-vector-store");
var vectorStore = await api.VectorStoresEndpoint.CreateVectorStoreAsync(createVectorStoreRequest);
Console.WriteLine(vectorStore);

Retrieves a vector store.

using var api = new OpenAIClient();
var vectorStore = await api.VectorStoresEndpoint.GetVectorStoreAsync("vector-store-id");
Console.WriteLine(vectorStore);

Modifies a vector store.

using var api = new OpenAIClient();
var metadata = new Dictionary<string, object> { { "Test", DateTime.UtcNow } };
var vectorStore = await api.VectorStoresEndpoint.ModifyVectorStoreAsync("vector-store-id", metadata: metadata);
Console.WriteLine(vectorStore);

Delete a vector store.

using var api = new OpenAIClient();
var isDeleted = await api.VectorStoresEndpoint.DeleteVectorStoreAsync("vector-store-id");
Assert.IsTrue(isDeleted);

Vector Store Files

Vector store files represent files inside a vector store.

Returns a list of vector store files.

using var api = new OpenAIClient();
var files = await api.VectorStoresEndpoint.ListVectorStoreFilesAsync("vector-store-id");

foreach (var file in files.Items)
{
    Console.WriteLine(file);
}

Create a vector store file by attaching a file to a vector store.

using var api = new OpenAIClient();
var file = await api.VectorStoresEndpoint.CreateVectorStoreFileAsync("vector-store-id", "file-id", new ChunkingStrategy(ChunkingStrategyType.Static));
Console.WriteLine(file);

Retrieves a vector store file.

using var api = new OpenAIClient();
var file = await api.VectorStoresEndpoint.GetVectorStoreFileAsync("vector-store-id", "vector-store-file-id");
Console.WriteLine(file);

Delete a vector store file. This will remove the file from the vector store but the file itself will not be deleted. To delete the file, use the delete file endpoint.

using var api = new OpenAIClient();
var isDeleted = await api.VectorStoresEndpoint.DeleteVectorStoreFileAsync("vector-store-id", "file-id");
Assert.IsTrue(isDeleted);

Vector Store File Batches

Vector store file batches represent operations to add multiple files to a vector store.

Create a vector store file batch.

using var api = new OpenAIClient();
var files = new List<string> { "file_id_1","file_id_2" };
var vectorStoreFileBatch = await api.VectorStoresEndpoint.CreateVectorStoreFileBatchAsync("vector-store-id", files);
Console.WriteLine(vectorStoreFileBatch);

Retrieves a vector store file batch.

using var api = new OpenAIClient();
var vectorStoreFileBatch = await api.VectorStoresEndpoint.GetVectorStoreFileBatchAsync("vector-store-id", "vector-store-file-batch-id");
// you can also use convenience methods!
vectorStoreFileBatch = await vectorStoreFileBatch.UpdateAsync();
vectorStoreFileBatch = await vectorStoreFileBatch.WaitForStatusChangeAsync();

Returns a list of vector store files in a batch.

using var api = new OpenAIClient();
var files = await api.VectorStoresEndpoint.ListVectorStoreBatchFilesAsync("vector-store-id", "vector-store-file-batch-id");

foreach (var file in files.Items)
{
    Console.WriteLine(file);
}

Cancel a vector store file batch. This attempts to cancel the processing of files in this batch as soon as possible.

using var api = new OpenAIClient();
var isCancelled = await api.VectorStoresEndpoint.CancelVectorStoreFileBatchAsync("vector-store-id", "vector-store-file-batch-id");
Assert.IsTrue(isCancelled);

Given a chat conversation, the model will return a chat completion response.

The Chat API is accessed via OpenAIClient.ChatEndpoint

Creates a completion for the chat message

using var api = new OpenAIClient();
var messages = new List<Message>
{
    new Message(Role.System, "You are a helpful assistant."),
    new Message(Role.User, "Who won the world series in 2020?"),
    new Message(Role.Assistant, "The Los Angeles Dodgers won the World Series in 2020."),
    new Message(Role.User, "Where was it played?"),
};
var chatRequest = new ChatRequest(messages, Model.GPT4o);
var response = await api.ChatEndpoint.GetCompletionAsync(chatRequest);
var choice = response.FirstChoice;
Console.WriteLine($"[{choice.Index}] {choice.Message.Role}: {choice.Message} | Finish Reason: {choice.FinishReason}");

Or stream the completion and handle each partial response as it arrives:

using var api = new OpenAIClient();
var messages = new List<Message>
{
    new Message(Role.System, "You are a helpful assistant."),
    new Message(Role.User, "Who won the world series in 2020?"),
    new Message(Role.Assistant, "The Los Angeles Dodgers won the World Series in 2020."),
    new Message(Role.User, "Where was it played?"),
};
var chatRequest = new ChatRequest(messages);
var response = await api.ChatEndpoint.StreamCompletionAsync(chatRequest, async partialResponse =>
{
    Console.Write(partialResponse.FirstChoice.Delta.ToString());
    await Task.CompletedTask;
});
var choice = response.FirstChoice;
Console.WriteLine($"[{choice.Index}] {choice.Message.Role}: {choice.Message} | Finish Reason: {choice.FinishReason}");

Or if using IAsyncEnumerable&lt;T&gt; (C# 8.0+)

using var api = new OpenAIClient();
var messages = new List<Message>
{
    new Message(Role.System, "You are a helpful assistant."),
    new Message(Role.User, "Who won the world series in 2020?"),
    new Message(Role.Assistant, "The Los Angeles Dodgers won the World Series in 2020."),
    new Message(Role.User, "Where was it played?"),
};
var cumulativeDelta = string.Empty;
var chatRequest = new ChatRequest(messages);
await foreach (var partialResponse in api.ChatEndpoint.StreamCompletionEnumerableAsync(chatRequest))
{
    foreach (var choice in partialResponse.Choices.Where(choice => choice.Delta?.Content != null))
    {
        cumulativeDelta += choice.Delta.Content;
    }
}

Console.WriteLine(cumulativeDelta);

Chat completions also support tools (function calling):

using var api = new OpenAIClient();
var messages = new List<Message>
{
    new(Role.System, "You are a helpful weather assistant. Always prompt the user for their location."),
    new Message(Role.User, "What's the weather like today?"),
};

foreach (var message in messages)
{
    Console.WriteLine($"{message.Role}: {message}");
}

// Define the tools that the assistant is able to use.
// Option 1: get a list of all the static methods decorated with FunctionAttribute:
var tools = Tool.GetAllAvailableTools(includeDefaults: false, forceUpdate: true, clearCache: true);
// Option 2: define a custom list of tools instead:
// var tools = new List<Tool>
// {
//     Tool.GetOrCreateTool(objectInstance, "TheNameOfTheMethodToCall"),
//     Tool.FromFunc("a_custom_name_for_your_function", () => { /* Some logic to run */ })
// };
var chatRequest = new ChatRequest(messages, tools: tools, toolChoice: "auto");
var response = await api.ChatEndpoint.GetCompletionAsync(chatRequest);
messages.Add(response.FirstChoice.Message);

Console.WriteLine($"{response.FirstChoice.Message.Role}: {response.FirstChoice} | Finish Reason: {response.FirstChoice.FinishReason}");

var locationMessage = new Message(Role.User, "I'm in Glasgow, Scotland");
messages.Add(locationMessage);
Console.WriteLine($"{locationMessage.Role}: {locationMessage.Content}");
chatRequest = new ChatRequest(messages, tools: tools, toolChoice: "auto");
response = await api.ChatEndpoint.GetCompletionAsync(chatRequest);

messages.Add(response.FirstChoice.Message);

if (response.FirstChoice.FinishReason == "stop")
{
    Console.WriteLine($"{response.FirstChoice.Message.Role}: {response.FirstChoice} | Finish Reason: {response.FirstChoice.FinishReason}");

    var unitMessage = new Message(Role.User, "Fahrenheit");
    messages.Add(unitMessage);
    Console.WriteLine($"{unitMessage.Role}: {unitMessage.Content}");
    chatRequest = new ChatRequest(messages, tools: tools, toolChoice: "auto");
    response = await api.ChatEndpoint.GetCompletionAsync(chatRequest);
}

// iterate over all tool calls and invoke them
foreach (var toolCall in response.FirstChoice.Message.ToolCalls)
{
    Console.WriteLine($"{response.FirstChoice.Message.Role}: {toolCall.Function.Name} | Finish Reason: {response.FirstChoice.FinishReason}");
    Console.WriteLine($"{toolCall.Function.Arguments}");
    // Invokes function to get a generic json result to return for tool call.
    var functionResult = await toolCall.InvokeFunctionAsync();
    // If you know the return type and do additional processing you can use generic overload
    var functionResult = await toolCall.InvokeFunctionAsync<string>();
    messages.Add(new Message(toolCall, functionResult));
    Console.WriteLine($"{Role.Tool}: {functionResult}");
}
// System: You are a helpful weather assistant.
// User: What's the weather like today?
// Assistant: Sure, may I know your current location? | Finish Reason: stop
// User: I'm in Glasgow, Scotland
// Assistant: GetCurrentWeather | Finish Reason: tool_calls
// {
//   "location": "Glasgow, Scotland",
//   "unit": "celsius"
// }
// Tool: The current weather in Glasgow, Scotland is 39°C.
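After appending each tool result to messages, the conversation is typically sent back to the model so it can compose the final natural-language answer; continuing the example above:

```csharp
// Send the tool results back to the model for the final reply.
chatRequest = new ChatRequest(messages, tools: tools, toolChoice: "auto");
response = await api.ChatEndpoint.GetCompletionAsync(chatRequest);
Console.WriteLine($"{response.FirstChoice.Message.Role}: {response.FirstChoice} | Finish Reason: {response.FirstChoice.FinishReason}");
```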

Warning

Beta Feature. API subject to breaking changes.

using var api = new OpenAIClient();
var messages = new List<Message>
{
    new Message(Role.System, "You are a helpful assistant."),
    new Message(Role.User, new List<Content>
    {
        "What's in this image?",
        new ImageUrl("https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg", ImageDetail.Low)
    })
};
var chatRequest = new ChatRequest(messages, model: Model.GPT4o);
var response = await api.ChatEndpoint.GetCompletionAsync(chatRequest);
Console.WriteLine($"{response.FirstChoice.Message.Role}: {response.FirstChoice.Message.Content} | Finish Reason: {response.FirstChoice.FinishDetails}");

Structured Outputs is the evolution of JSON mode. While both ensure valid JSON is produced, only Structured Outputs guarantees schema adherence.

Important

  • When using JSON mode, always instruct the model to produce JSON via some message in the conversation, for example via your system message. If you don't include an explicit instruction to generate JSON, the model may generate an unending stream of whitespace and the request may run continually until it reaches the token limit. To help ensure you don't forget, the API will throw an error if the string "JSON" does not appear somewhere in the context.
  • The JSON in the message the model returns may be partial (i.e. cut off) if finish_reason is length, which indicates the generation exceeded max_tokens or the conversation exceeded the token limit. To guard against this, check finish_reason before parsing the response.

First define the structure of your responses. These will be used as your schema. These are the objects you'll deserialize to, so be sure to use standard Json object models.

public class MathResponse
{
    [JsonInclude]
    [JsonPropertyName("steps")]
    public IReadOnlyList<MathStep> Steps { get; private set; }

    [JsonInclude]
    [JsonPropertyName("final_answer")]
    public string FinalAnswer { get; private set; }
}

public class MathStep
{
    [JsonInclude]
    [JsonPropertyName("explanation")]
    public string Explanation { get; private set; }

    [JsonInclude]
    [JsonPropertyName("output")]
    public string Output { get; private set; }
}

To use, simply specify the MathResponse type as a generic constraint when requesting a completion.

var messages = new List<Message>
{
    new(Role.System, "You are a helpful math tutor. Guide the user through the solution step by step."),
    new(Role.User, "how can I solve 8x + 7 = -23")
};

using var api = new OpenAIClient();
var chatRequest = new ChatRequest<MathResponse>(messages, model: new("gpt-4o-2024-08-06"));
var (mathResponse, chatResponse) = await api.ChatEndpoint.GetCompletionAsync<MathResponse>(chatRequest);

for (var i = 0; i < mathResponse.Steps.Count; i++)
{
    var step = mathResponse.Steps[i];
    Console.WriteLine($"Step {i}: {step.Explanation}");
    Console.WriteLine($"Result: {step.Output}");
}

Console.WriteLine($"Final Answer: {mathResponse.FinalAnswer}");
chatResponse.GetUsage();

Important

  • When using JSON mode, always instruct the model to produce JSON via some message in the conversation, for example via your system message. If you don't include an explicit instruction to generate JSON, the model may generate an unending stream of whitespace and the request may run continually until it reaches the token limit. To help ensure you don't forget, the API will throw an error if the string "JSON" does not appear somewhere in the context.
  • The JSON in the message the model returns may be partial (i.e. cut off) if finish_reason is length, which indicates the generation exceeded max_tokens or the conversation exceeded the token limit. To guard against this, check finish_reason before parsing the response.
  • JSON mode will not guarantee the output matches any specific schema, only that it is valid and parses without errors.

using var api = new OpenAIClient();
var messages = new List<Message>
{
    new Message(Role.System, "You are a helpful assistant designed to output JSON."),
    new Message(Role.User, "Who won the world series in 2020?"),
};
var chatRequest = new ChatRequest(messages, Model.GPT4o, responseFormat: ChatResponseFormat.Json);
var response = await api.ChatEndpoint.GetCompletionAsync(chatRequest);

foreach (var choice in response.Choices)
{
    Console.WriteLine($"[{choice.Index}] {choice.Message.Role}: {choice} | Finish Reason: {choice.FinishReason}");
}

response.GetUsage();
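As the notes above advise, check the finish reason before parsing JSON-mode output; a minimal guard, assuming the finish reason surfaces as the string "length" and Message.Content stringifies to the raw JSON:

```csharp
if (response.FirstChoice.FinishReason == "length")
{
    // The generation hit the token limit; the JSON may be cut off mid-document.
    throw new InvalidOperationException("Response truncated; raise max_tokens or shorten the conversation before parsing.");
}

using var document = JsonDocument.Parse(response.FirstChoice.Message.Content.ToString());
```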

Converts text into audio and audio into text.

The Audio API is accessed via OpenAIClient.AudioEndpoint

Generates audio from the input text.

using var api = new OpenAIClient();
var request = new SpeechRequest("Hello World!");
async Task ChunkCallback(ReadOnlyMemory<byte> chunk)
{
    // TODO Implement audio playback as chunks arrive
    await Task.CompletedTask;
}

var response = await api.AudioEndpoint.CreateSpeechAsync(request, ChunkCallback);
await File.WriteAllBytesAsync("../../../Assets/HelloWorld.mp3", response.ToArray());

Transcribes audio into the input language.

using var api = new OpenAIClient();
using var request = new AudioTranscriptionRequest(Path.GetFullPath(audioAssetPath), language: "en");
var response = await api.AudioEndpoint.CreateTranscriptionTextAsync(request);
Console.WriteLine(response);

You can also get detailed information using verbose_json to get timestamp granularities:

using var api = new OpenAIClient();
using var request = new AudioTranscriptionRequest(transcriptionAudio, responseFormat: AudioResponseFormat.Verbose_Json, timestampGranularity: TimestampGranularity.Word, temperature: 0.1f, language: "en");
var response = await api.AudioEndpoint.CreateTranscriptionTextAsync(request);

foreach (var word in response.Words)
{
    Console.WriteLine($"[{word.Start}-{word.End}] \"{word.Word}\"");
}

Translates audio into English.

using var api = new OpenAIClient();
using var request = new AudioTranslationRequest(Path.GetFullPath(audioAssetPath));
var response = await api.AudioEndpoint.CreateTranslationTextAsync(request);
Console.WriteLine(response);

Given a prompt and/or an input image, the model will generate a new image.

The Images API is accessed via OpenAIClient.ImagesEndpoint

Creates an image given a prompt.

using var api = new OpenAIClient();
var request = new ImageGenerationRequest("A house riding a velociraptor", Model.DallE_3);
var imageResults = await api.ImagesEndPoint.GenerateImageAsync(request);

foreach (var image in imageResults)
{
    Console.WriteLine(image);
    // image == url or b64_string
}

Creates an edited or extended image given an original image and a prompt.

using var api = new OpenAIClient();
var request = new ImageEditRequest(imageAssetPath, maskAssetPath, "A sunlit indoor lounge area with a pool containing a flamingo", size: ImageSize.Small);
var imageResults = await api.ImagesEndPoint.CreateImageEditAsync(request);

foreach (var image in imageResults)
{
    Console.WriteLine(image);
    // image == url or b64_string
}

Creates a variation of a given image.

using var api = new OpenAIClient();
var request = new ImageVariationRequest(imageAssetPath, size: ImageSize.Small);
var imageResults = await api.ImagesEndPoint.CreateImageVariationAsync(request);

foreach (var image in imageResults)
{
    Console.WriteLine(image);
    // image == url or b64_string
}

Files are used to upload documents that can be used with features like Fine-tuning.

The Files API is accessed via OpenAIClient.FilesEndpoint

Returns a list of files that belong to the user's organization.

using var api = new OpenAIClient();
var fileList = await api.FilesEndpoint.ListFilesAsync();

foreach (var file in fileList)
{
    Console.WriteLine($"{file.Id} -> {file.Object}: {file.FileName} | {file.Size} bytes");
}

Upload a file that can be used across various endpoints. The size of all the files uploaded by one organization can be up to 100 GB.

The size of individual files can be a maximum of 512 MB. See the Assistants Tools guide to learn more about the types of files supported. The Fine-tuning API only supports .jsonl files.

using var api = new OpenAIClient();
var file = await api.FilesEndpoint.UploadFileAsync("path/to/your/file.jsonl", FilePurpose.FineTune);
Console.WriteLine(file.Id);
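For reference, each line of a chat-format fine-tuning .jsonl file is a standalone JSON object with a messages array; a hypothetical line built with System.Text.Json (the content here is illustrative, not from the library):

```csharp
using System.Text.Json;

// Each training example is one JSON object serialized onto a single line.
var trainingLine = JsonSerializer.Serialize(new
{
    messages = new object[]
    {
        new { role = "system", content = "You are a helpful assistant." },
        new { role = "user", content = "What is the capital of France?" },
        new { role = "assistant", content = "Paris." }
    }
});
Console.WriteLine(trainingLine);
```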

Delete a file.

using var api = new OpenAIClient();
var isDeleted = await api.FilesEndpoint.DeleteFileAsync(fileId);
Assert.IsTrue(isDeleted);

Returns information about a specific file.

using var api = new OpenAIClient();
var file = await api.FilesEndpoint.GetFileInfoAsync(fileId);
Console.WriteLine($"{file.Id} -> {file.Object}: {file.FileName} | {file.Size} bytes");

Downloads the file content to the specified directory.

using var api = new OpenAIClient();
var downloadedFilePath = await api.FilesEndpoint.DownloadFileAsync(fileId, "path/to/your/save/directory");
Console.WriteLine(downloadedFilePath);
Assert.IsTrue(File.Exists(downloadedFilePath));

Manage fine-tuning jobs to tailor a model to your specific training data.

Related guide: Fine-tune models

The Fine-tuning API is accessed via OpenAIClient.FineTuningEndpoint

Creates a job that fine-tunes a specified model from a given dataset.

Response includes details of the enqueued job including job status and the name of the fine-tuned models once complete.

using var api = new OpenAIClient();
var fileId = "file-abc123";
var request = new CreateFineTuneRequest(fileId);
var job = await api.FineTuningEndpoint.CreateJobAsync(Model.GPT3_5_Turbo, request);
Console.WriteLine($"Started {job.Id} | Status: {job.Status}");

List your organization's fine-tuning jobs.

using var api = new OpenAIClient();
var jobList = await api.FineTuningEndpoint.ListJobsAsync();

foreach (var job in jobList.Items.OrderByDescending(job => job.CreatedAt))
{
    Console.WriteLine($"{job.Id} -> {job.CreatedAt} | {job.Status}");
}

Gets info about the fine-tune job.

using var api = new OpenAIClient();
var job = await api.FineTuningEndpoint.GetJobInfoAsync(fineTuneJob);
Console.WriteLine($"{job.Id} -> {job.CreatedAt} | {job.Status}");

Immediately cancel a fine-tune job.

using var api = new OpenAIClient();
var isCancelled = await api.FineTuningEndpoint.CancelFineTuneJobAsync(fineTuneJob);
Assert.IsTrue(isCancelled);

Get status updates for a fine-tuning job.

using var api = new OpenAIClient();
var eventList = await api.FineTuningEndpoint.ListJobEventsAsync(fineTuneJob);
Console.WriteLine($"{fineTuneJob.Id} -> status: {fineTuneJob.Status} | event count: {eventList.Items.Count}");

foreach (var @event in eventList.Items.OrderByDescending(@event => @event.CreatedAt))
{
    Console.WriteLine($"  {@event.CreatedAt} [{@event.Level}] {@event.Message}");
}

Create large batches of API requests for asynchronous processing. The Batch API returns completions within 24 hours for a 50% discount.

The Batches API is accessed via OpenAIClient.BatchesEndpoint

List your organization's batches.

using var api = new OpenAIClient();
var batches = await api.BatchEndpoint.ListBatchesAsync();

foreach (var batch in batches.Items)
{
    Console.WriteLine(batch);
}

Creates and executes a batch from an uploaded file of requests.

using var api = new OpenAIClient();
var batchRequest = new CreateBatchRequest("file-id", Endpoint.ChatCompletions);
var batch = await api.BatchEndpoint.CreateBatchAsync(batchRequest);

Retrieves a batch.

using var api = new OpenAIClient();
var batch = await api.BatchEndpoint.RetrieveBatchAsync("batch-id");
// you can also use convenience methods!
batch = await batch.UpdateAsync();
batch = await batch.WaitForStatusChangeAsync();
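Once a batch completes, its results can be fetched as a .jsonl file through the Files endpoint; a sketch, where the completed-status check and the OutputFileId property name are assumptions about the API surface:

```csharp
using var api = new OpenAIClient();
var batch = await api.BatchEndpoint.RetrieveBatchAsync("batch-id");
batch = await batch.WaitForStatusChangeAsync();

if (batch.Status == BatchStatus.Completed)
{
    // The output file is a .jsonl with one response per request.
    var resultPath = await api.FilesEndpoint.DownloadFileAsync(batch.OutputFileId, "path/to/your/save/directory");
    Console.WriteLine(resultPath);
}
```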

Cancels an in-progress batch. The batch will be in status cancelling for up to 10 minutes, before changing to cancelled, where it will have partial results (if any) available in the output file.

using var api = new OpenAIClient();
var isCancelled = await api.BatchEndpoint.CancelBatchAsync(batch);
Assert.IsTrue(isCancelled);

Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.

Related guide: Embeddings

The Embeddings API is accessed via OpenAIClient.EmbeddingsEndpoint

Creates an embedding vector representing the input text.

using var api = new OpenAIClient();
var response = await api.EmbeddingsEndpoint.CreateEmbeddingAsync("The food was delicious and the waiter...", Model.Embedding_Ada_002);
Console.WriteLine(response);

Given an input text, the model outputs whether it classifies the text as violating OpenAI's content policy.

Related guide: Moderations

The Moderations API can be accessed via OpenAIClient.ModerationsEndpoint

Classifies if text violates OpenAI's Content Policy.

using var api = new OpenAIClient();
var isViolation = await api.ModerationsEndpoint.GetModerationAsync("I want to kill them.");
Assert.IsTrue(isViolation);

Additionally, you can get the scores of a given input.

using var api = new OpenAIClient();
var response = await api.ModerationsEndpoint.CreateModerationAsync(new ModerationsRequest("I love you"));
Assert.IsNotNull(response);
Console.WriteLine(response.Results?[0]?.Scores?.ToString());

openai-dotnet's People

Contributors

adefwebserver, chsword, damiant3, henduck, kikaragyozov, michalblaha, mitch528, sibbl, stephenhodgson


openai-dotnet's Issues

How do I send over a 16k token prompt?

Hi,
First off, nice library and thank you for sharing 👍 .
I have a question regarding "huge" prompts. Using:

var api = new OpenAIClient();
var chatPrompts = new List<ChatPrompt>
{
    new ChatPrompt("system", "You are a helpful assistant."),
    new ChatPrompt("user", "Who won the world series in 2020?"),
    new ChatPrompt("assistant", "The Los Angeles Dodgers won the World Series in 2020."),
    new ChatPrompt("user", "Where was it played?"),
};
var chatRequest = new ChatRequest(chatPrompts, Model.GPT3_5_Turbo);

await api.ChatEndpoint.StreamCompletionAsync(chatRequest, result =>
{
    Console.WriteLine(result.FirstChoice);
});

and adding chunks of 4k tokens as ChatPrompts, then sending the chat prompts via var chatRequest = new ChatRequest(chatPrompts, Model.GPT3_5_Turbo);, seems to result in a 4096-token limit complaint as a response. How would I be able to send such prompts and then ask a question? Thank you!

Proxy throws exception on non-Latin characters

Thanks for your hard work! ❤️

For the bug, when sending a chatprompt like:

how do you say hello in Greek

The response should be:

The word for "hello" in Greek is "Χαίρετε" (pronounced as "Hei-re-te").

This works just fine when using a direct connection to OpenAI.

However, with the proxy in between, I get an HttpRequestException exception:

Error while copying content to a stream.

This is most likely caused by the encoding, because when asked:

how do you say hello in French

It correctly replies:

"Bonjour" is the way to say hello in French.

EditEndpoint no longer works

Bug Report

Overview

Using the EditEndpoint no longer works. Davinci 3 was removed, and I can't add my own model because of the added validation.

ChatResponse.Usage in StreamCompletionEnumerableAsync is null while having value in GetCompletionAsync

        [Test]
        public async Task Test_1_GetChatCompletion()
        {
            Assert.IsNotNull(OpenAIClient.ChatEndpoint);
            var messages = new List<Message>
            {
                new Message(Role.System, "You are a helpful assistant."),
                new Message(Role.User, "Who won the world series in 2020?"),
                new Message(Role.Assistant, "The Los Angeles Dodgers won the World Series in 2020."),
                new Message(Role.User, "Where was it played?"),
            };
            var chatRequest = new ChatRequest(messages, number: 2);
            var result = await OpenAIClient.ChatEndpoint.GetCompletionAsync(chatRequest);
            Assert.IsNotNull(result);
            Assert.IsNotNull(result.Choices);
            Assert.IsTrue(result.Choices.Count == 2);

            foreach (var choice in result.Choices)
            {
                Console.WriteLine($"[{choice.Index}] {choice.Message.Role}: {choice.Message.Content} | Finish Reason: {choice.FinishReason}");
				if (result.Usage != null)
				{
					System.Diagnostics.Trace.WriteLine($"Usage: {result.Usage.TotalTokens} "); // works!!!
				}
			}
		}
        [Test]
        public async Task Test_3_GetChatStreamingCompletionEnumerableAsync()
        {
            Assert.IsNotNull(OpenAIClient.ChatEndpoint);
            var messages = new List<Message>
            {
                new Message(Role.System, "You are a helpful assistant."),
                new Message(Role.User, "Who won the world series in 2020?"),
                new Message(Role.Assistant, "The Los Angeles Dodgers won the World Series in 2020."),
                new Message(Role.User, "Where was it played?"),
            };
            var chatRequest = new ChatRequest(messages, number: 1);
            await foreach (var result in OpenAIClient.ChatEndpoint.StreamCompletionEnumerableAsync(chatRequest))
            {
                Assert.IsNotNull(result);
                Assert.IsNotNull(result.Choices);
                Assert.NotZero(result.Choices.Count);

                foreach (var choice in result.Choices.Where(choice => choice.Delta?.Content != null))
                {
                    //Console.WriteLine($"[{choice.Index}] {choice.Delta.Content}");
					System.Diagnostics.Trace.WriteLine($"[{choice.Index}] {choice.Delta.Content}");
				}

                foreach (var choice in result.Choices.Where(choice => choice.Message?.Content != null))
                {
					//Console.WriteLine($"[{choice.Index}] {choice.Message.Role}: {choice.Message.Content} | Finish Reason: {choice.FinishReason}");
					System.Diagnostics.Trace.WriteLine($"[{choice.Index}] {choice.Message.Role}: {choice.Message.Content} | Finish Reason: {choice.FinishReason}");
				}

				if (result.Usage != null)
                {
					System.Diagnostics.Trace.WriteLine($"Usage: {result.Usage.TotalTokens} "); // never happened!!!
				}
			}
		}

Perhaps I just don't understand the lifecycle of IAsyncEnumerable<T>; will the value eventually be filled in? If so, how can I get it? Thanks!

Naming should follow .NET conventions, default argument values should be actual default values

Feature Request

Let's looks at these arguments:

string prompt = null,
string[] prompts = null,
string suffix = null,
int? max_tokens = null,
double? temperature = null,
double? top_p = null,
int? numOutputs = null,
double? presencePenalty = null,
double? frequencyPenalty = null,
int? logProbabilities = null,
bool? echo = null,
string[] stopSequences = null,

Problem #1: names like top_p and max_tokens use snake_case. This violates .NET naming conventions which mandate using camelCase for arguments. They should be named topP and maxTokens instead.

Problem #2: arguments like temperature and presencePenalty have real default values (1.0 and 0.0), according to the API documentation. That means they should be double temperature = 1.0d and double presencePenalty = 0.0d, respectively.

Problem #3: naming of logProbabilities is inconsistent. The relevant class is named Logprobs. Naming conventions prescribe using full words, so the class should be named LogProbabilities (as well as all related identifiers).

Problem #4: prompts and stopSequences accept only arrays. They should be IEnumerable<string> to be able to accept List<string> and other types without creating arrays.

Problem #5: I'm not a fan of echo as the official name is confusing. I'd prefer something like echoPrompt. Not sure whether nicer or official names should be prioritized.

Problem #6: I don't see any value in having Endpoint in the name of properties like CompletionsEndpoint CompletionsEndpoint { get; } when almost all properties are endpoints.

Is your feature request related to a problem? Please describe.

C# Coding Conventions

(Huh, _ prefix is official now.)

Describe the solution you'd like

Suggested solutions are provided within the list of problems above.

Describe alternatives you've considered

An alternative is violating C# naming conventions and using official names from OpenAI API documentation, but considering the current naming is much closer to C# than to OpenAI, renaming to the C# way would be easier (and better, obviously).

Additional context

This is obviously a breaking change, which can be annoying for the current users. But considering OpenAI's API is kinda "beta", it's better to do it now rather than later.

Remove requirement for Microsoft.AspNetCore.App

Feature Request

Is your feature request related to a problem? Please describe.

Version 5 of OpenAI-DotNet was running without problems on dotnet/runtime docker base image (the one with minimal dotnet and no asp.net libraries). Version 6.3.1 requires running on dotnet/aspnet base image. https://github.com/RageAgainstThePixel/OpenAI-DotNet/blob/main/OpenAI-DotNet/OpenAI-DotNet.csproj#L102 . Now it is impossible to use this library in console apps (e.g. telegram bot).

Describe the solution you'd like

Remove requirements for asp.net.

Describe alternatives you've considered

Run console apps on asp.net docker images (much heavier).

Add Optional Mask parameter overload for image edits

Didn't want to make this too complicated:

The edit image function has the mask as optional per the API reference, but with this repo it's required. If you could be so kind as to set it to optional 👍

Or if I am completely wrong and I need to use it in some other form feel free to explain!

Quick edit: love the rest of the repo, and thanks for making the nuget a bit more visible!

ChatRequest forced function calls

Bug Report

Overview

Currently unable to manually call a function using the function_call parameter when using chat completion due to the API expecting either a string ("auto", "none") or a JSON object in the form of { "name": "function_name" }.

The docs suggest it should be a JSON object through a string however doing so results in a bad request error.

Exception:

System.Net.Http.HttpRequestException: StreamCompletionEnumerableAsync Failed! HTTP status code: BadRequest | Response body: {
  "error": {
    "message": "'$.function_call' is invalid. Please check the API reference: https://platform.openai.com/docs/api-reference.",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}

To Reproduce

Steps to reproduce the behavior:

            var functions = new List<Function>
            {
                new Function(
                    nameof(WeatherService.GetCurrentWeather),
                    "Get the current weather in a given location",
                    new JsonObject
                    {
                        ["type"] = "object",
                        ["properties"] = new JsonObject
                        {
                            ["location"] = new JsonObject
                            {
                                ["type"] = "string",
                                ["description"] = "The city and state, e.g. San Francisco, CA"
                            },
                            ["unit"] = new JsonObject
                            {
                                ["type"] = "string",
                                ["enum"] = new JsonArray {"celsius", "fahrenheit"}
                            }
                        },
                        ["required"] = new JsonArray { "location", "unit" }
                    })
            };

            var chatRequest = new ChatRequest(messages, functions: functions, functionCall: "{ \"name\": \"GetCurrentWeather\" }", model: "gpt-3.5-turbo-0613");

Expected behavior

Request to API not to fail.

Default API version 2022-12-01 results in 404 on chat completion request for new Azure OpenAI deployments

Bug Report

Overview

By default, "2022-12-01" is used as the ApiVersion. However, this results in a 404 being returned from Azure OpenAI when the deployment was recently created. With the same setup and only setting ApiVersion to "2023-05-15", a valid result is returned.

{
	"error": {
		"code": "404",
		"message": "Resource not found"
	}
}

To Reproduce

Steps to reproduce the behavior:

  1. Create a new OpenAIClient with only resourceName and deploymentId from Azure being set.
  2. Use the api.ChatEndpoint.GetCompletionAsync method to make a completion request

Expected behavior

It should work out of the box.


Additional context

For the first basic tests, updating the ApiVersion seems to be no problem. However, in a quick search I could not find a diff between both API versions. Maybe this issue could be migrated into a generic "support API version 2023-05-15" feature support ticket?

Add DI Support for OpenAIClient

Feature Request

It would be nice to have some extension methods that setup the OpenAIClient as a registered dependency in the IServiceCollection interface.
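Such an extension could be sketched along these lines (the method name and registration shape are hypothetical, not part of the library):

```csharp
using Microsoft.Extensions.DependencyInjection;
using OpenAI;

public static class OpenAIServiceCollectionExtensions
{
    // Hypothetical helper: registers a shared OpenAIClient with the container.
    public static IServiceCollection AddOpenAIClient(this IServiceCollection services, string apiKey)
    {
        services.AddSingleton(_ => new OpenAIClient(apiKey));
        return services;
    }
}
```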

Add Text Tokenizer

Feature Request

Add a way to tokenize text so that it can be passed as an input (like logit_bias) for models

Is your feature request related to a problem? Please describe.

I am trying to use OpenAI APIs like completion, which accept an optional "logit_bias" parameter, but currently there is no way to generate the proper tokens of a text in order to pass to it.

Describe the solution you'd like

.Net implementation of OpenAI's Tokenizer

Describe alternatives you've considered

There is an existing MIT-licensed NuGet package called GPT-3-Encoder-Sharp that does this.

Blazor WASM Hosted - Request Fails from Razor Page

Bug Report

Overview

I am testing OpenAI-DotNet in Blazor WASM Hosted [Client/Server].
The sample code below works in Blazor Server.
However, in Blazor WASM Hosted the chat request fails with the response 'The given header was not found.'
I can get a list of models using the sample code provided:

protected override async Task OnInitializedAsync()
{
    string ApiKey = "<Your API-KEY>";
    var api = new OpenAIClient(new OpenAIAuthentication(ApiKey, Organization));
    var models = await api.ModelsEndpoint.GetModelsAsync();
    foreach (var model in models)
    {
        Console.WriteLine("OpenAIModels: " + model.ToString());
    }
}

However, a simple request fails with the response 'The given header was not found.'

protected override async Task OnInitializedAsync()
{
    string ApiKey = "<Your API-KEY>";
    var api = new OpenAIClient(new OpenAIAuthentication(ApiKey, Organization));
    await api.CompletionsEndpoint.StreamCompletionAsync(result =>
    {
        foreach (var choice in result.Completions)
        {
            Console.WriteLine(choice);
        }
    }, "My name is Roger and I am a principal software engineer at Salesforce.  This is my resume:", maxTokens: 200, temperature: 0.5, presencePenalty: 0.1, frequencyPenalty: 0.1, model: OpenAI.Models.Model.Davinci);
}

To Reproduce

Steps to reproduce the behavior:

  1. Create Blazor WASM Hosted solution
  2. Add Nuget OpenAI-DotNet to Client
  3. Add razor page and add the code above [Note an ApiKey is required]

Expected behavior

Response: I am an experienced Software Engineer with 5+ years of experience...

Additional context

crit:

Microsoft.AspNetCore.Components.WebAssembly.Rendering.WebAssemblyRenderer[100]
      Unhandled exception rendering component: The given header was not found.
System.InvalidOperationException: The given header was not found.
   at System.Net.Http.Headers.HttpHeaders.GetValues(HeaderDescriptor descriptor)
   at System.Net.Http.Headers.HttpHeaders.GetValues(String name)
   at OpenAI.ResponseExtensions.SetResponseData(BaseResponse response, HttpResponseHeaders headers)
   at OpenAI.ResponseExtensions.DeserializeResponse[CompletionResult](HttpResponseMessage response, String json, JsonSerializerOptions settings)
   at OpenAI.Completions.CompletionsEndpoint.CreateCompletionAsync(CompletionRequest completionRequest, CancellationToken cancellationToken)
   at OpenAI.Completions.CompletionsEndpoint.CreateCompletionAsync(String prompt, IEnumerable`1 prompts, String suffix, Nullable`1 maxTokens, Nullable`1 temperature, Nullable`1 topP, Nullable`1 numOutputs, Nullable`1 presencePenalty, Nullable`1 frequencyPenalty, Nullable`1 logProbabilities, Nullable`1 echo, IEnumerable`1 stopSequences, Model model, CancellationToken cancellationToken)
   at BlazorFileSystem.Client.Pages.OpenAI_5.GenerateResponse() in C:\_PTBSX-Dev\_Blazor\BTE\BlazorFileSystem\BlazorFileSystem\Client\Pages\OpenAI-5.razor:line 42
   at Microsoft.AspNetCore.Components.ComponentBase.CallStateHasChangedOnAsyncCompletion(Task task)
   at Microsoft.AspNetCore.Components.RenderTree.Renderer.GetErrorHandledTask(Task taskToHandle, ComponentState owningComponentState)

Add chunk size parameter to EmbeddingsRequest

Feature Request

Azure OpenAI only allows one single string to be part of an embeddings request. Other frameworks have a chunk_size or embed_batch_size parameter for this.

Describe the solution you'd like

I'd propose an int? ChunkSize = null parameter for the EmbeddingsRequest. If it's > 0, then multiple requests should be made, each with n lines per request.

Describe alternatives you've considered

I did the chunking myself, but as other frameworks have this built-in, we might also want to add such a parameter here.
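The manual chunking mentioned above can be sketched like this, assuming the `EmbeddingsEndpoint.CreateEmbeddingAsync(string)` shape from the README; the chunk size and joining strategy are illustrative:

```csharp
// Workaround sketch: one embeddings request per chunk, since Azure
// OpenAI accepts only a single string input per request.
foreach (var chunk in lines.Chunk(16)) // Enumerable.Chunk, .NET 6+
{
    var input = string.Join("\n", chunk);
    // Response type and method signature may vary by library version.
    var response = await api.EmbeddingsEndpoint.CreateEmbeddingAsync(input);
    Console.WriteLine(response);
}
```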

Additional context

Quote from MS docs about this limitation:

I am trying to use embeddings and received the error "InvalidRequestError: Too many inputs. The max number of inputs is 1." How do I fix this?
This error typically occurs when you try to send a batch of text to embed in a single API request as an array. Currently Azure OpenAI does not support batching with embedding requests. Embeddings API calls should consist of a single string input per request. The string can be up to 8191 tokens in length when using the text-embedding-ada-002 (Version 2) model.

Setting http request timeout

Feature Request

Is your feature request related to a problem? Please describe.

I need to increase default timeout for requests to OpenAI api.

Describe the solution you'd like

Make internal constructor with HttpClient parameter public.

Describe alternatives you've considered

Reflection.
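If that constructor were made public, usage might look like the following sketch; the `client` parameter is the requested feature, not a currently public API:

```csharp
// Requested usage sketch: supply a pre-configured HttpClient with a
// longer timeout instead of the library's internal default.
var httpClient = new HttpClient
{
    Timeout = TimeSpan.FromMinutes(5)
};
// Parameter name "client" is illustrative.
using var api = new OpenAIClient(
    new OpenAIAuthentication("sk-apiKey"), client: httpClient);
```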

Azure OpenAI - API Key prefix issue

Bug Report

The library expects the API key to start with "sk-". This works for OpenAI keys, but Azure OpenAI API keys do not start with this prefix.

Overview

OpenAI API key prefix = "sk-" - correct
Azure OpenAI API key prefix = "" - Azure has no prefix.

This resulted in the wrong key being passed to Azure OpenAI.

Add option to not validate API keys

Feature Request

Problem

Some proxies that give access to OpenAI API do not have their keys start with sk- or sess-, and some users may want to use them.

Solution

Add option to disable validation of API keys in constructor/client settings.

Workaround

  1. Clone the repository
  2. Go to ./OpenAI-DotNet/OpenAIClient.cs, find method SetupClient
  3. Remove line throw new InvalidCredentialException($"{OpenAIAuthentication.ApiKey} must start with '{AuthInfo.SecretKeyPrefix}'");
  4. Build the repository and add reference to your newly built library

Support for gpt-3.5-turbo

Feature Request

Is your feature request related to a problem? Please describe.

When requesting a completion for the new model "gpt-3.5-turbo", I get the following error:
This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions

Describe the solution you'd like

It seems that they have moved the endpoint url for new models to v1/chat/completions

Describe alternatives you've considered

Maybe a new bool (isNewModel) to define whether or not to add this "/chat/".

Thanks!
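For reference, a chat-completions call through the dedicated endpoint (the route the error message points at) looks roughly like this, using the `ChatPrompt`/`ChatRequest` shapes that appear elsewhere in this thread; property names are assumed from the README:

```csharp
// Chat models go through /v1/chat/completions rather than /v1/completions.
var chatPrompts = new List<ChatPrompt>
{
    new ChatPrompt("system", "You are a helpful assistant."),
    new ChatPrompt("user", "Hello!")
};
var chatRequest = new ChatRequest(chatPrompts, model: "gpt-3.5-turbo");
var result = await api.ChatEndpoint.GetCompletionAsync(chatRequest);
Console.WriteLine(result.FirstChoice);
```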

Unable to use fine tuned model

It appears that you should be able to pass a model ID to the OpenAI API to use a specified fine-tuned model:

https://platform.openai.com/docs/api-reference/completions

however:

if (!Model.Contains("turbo") &&
    !Model.Contains("gpt-4"))
{
    throw new ArgumentException($"{Model} is not supported", nameof(model));
}

it seems you must have the model named in a particular way. Further,

in the CreateFineTuneJobRequest class:

I believe model should not default to null. It should be a required parameter - to name the model for the fine-tune request.

Issue with CreateCompletionAsync hanging on specific routes

Hello,

I've encountered a perplexing issue with your library. Specifically, the CreateCompletionAsync method hangs indefinitely when routed to OpenAI API endpoints that have trace times of around 3-4 seconds.

Here is the problematic call I'm making:

var result = await _client.CompletionsEndpoint.CreateCompletionAsync(
    prompt: $"Translate this to English:\n\n{text}",
    model: Model.Davinci,
    temperature: 0.3,
    maxTokens: 300,
    topP: 1.0,
    frequencyPenalty: 0.0,
    presencePenalty: 0.0
);

When the request is routed to API endpoints with trace times less than 3 seconds, it executes correctly. However, when the routing takes longer (about 3-4 seconds), the call hangs until a Microsoft-level timeout occurs.

Do you have any insights as to why this is happening? Could there be a conflict between the library's async mechanisms and the longer-than-usual response time of some routes? How might I fix this issue?

Thank you in advance for your help.

Update the streaming examples to use `string.IsNullOrEmpty` instead of `string.IsNullOrWhiteSpace`

I suggest updating the streaming examples in the README.md file to use string.IsNullOrEmpty instead of string.IsNullOrWhiteSpace.

What happens is \n is considered whitespace, so when streaming the GPT markdown output, say there is a list, it is impossible to render the list correctly until receiving the final, complete message at the end, which defeats the purpose of streaming it in the first place.
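With that change, a streaming handler would be sketched as follows; the delta property names are assumed from the README's streaming examples:

```csharp
await api.ChatEndpoint.StreamCompletionAsync(chatRequest, result =>
{
    var content = result.FirstChoice?.Delta?.Content;
    // IsNullOrEmpty lets whitespace-only tokens such as "\n" through,
    // so markdown lists render correctly while streaming.
    if (string.IsNullOrEmpty(content)) { return; }
    Console.Write(content);
});
```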

I scratched my head around this for a bit yesterday evening and dug way too deep into the code (including the BCL) to finally realize it was just the example I started with that introduced the issue. So my conclusion is: if I did hit this issue, others might as well, so why not make it better to save them some time?

I can open a PR if you want.

Thanks

Program exits with code 0 and does nothing.

Bug Report

Overview

code used:

var api = new OpenAIClient("sk-my.key");
var models = await api.ModelsEndpoint.GetModelsAsync();
foreach (var model in models)
{
    Console.WriteLine(model.ToString());
}

When running this code, the program crashes and exits with code 0. No idea why it would break so hard that it does not even generate an Exception.

TestFixture_00_Authentication fails

Bug Report

Overview

TestFixture_00_Authentication fails

To Reproduce

Steps to reproduce the behavior:

  1. Enter OPENAI_API_KEY and OPENAI_ORGANIZATION_ID in environment variables
  2. Run Tests
  3. Get:
    Test_01_GetAuthFromEnv
     Source: TestFixture_00_Authentication.cs line 21
     Duration: 47.3 sec

Message: 
Expected: null
But was: "{{ my org key }}"

Expected behavior

test should pass

finish_reason support in streams

As far as I can see, only the stop finish reason is currently supported in chat and completion streams.
Can you add the length and content_filter finish reasons please?
Thanks.

Support Azure OpenAI endpoints

Feature Request

Is your feature request related to a problem? Please describe.

Microsoft also has an option to host your own instances of OpenAI on Azure

Describe the solution you'd like

A nice solution that doesn't complicate things too much.

Describe alternatives you've considered

@DanMMSFT suggested #6 but it's been closed due to the changes in the library since that time.
Some endpoints seem like they need to be majorly retrofitted.

Additional context

Documentation: https://learn.microsoft.com/en-us/azure/cognitive-services/openai/

Having problems with any model other than Davinci.

Bug Report

Overview

When using any model other than Davinci, the call to a chat completion fails with the error xxx Model is not supported.

To Reproduce

Steps to reproduce the behavior:
OpenAI.Completions.CompletionResult result = await api.CompletionsEndpoint.CreateCompletionAsync(request, temperature: 0, model: Model.GPT3_5_Turbo, maxTokens: 200);
returns gpt-3.5-turbo is not supported (Parameter 'model')

Expected behavior

OpenAI.Completions.CompletionResult result = await api.CompletionsEndpoint.CreateCompletionAsync(request, temperature: 0, model: Model.Davinci, maxTokens: 200);

Returns the expected response.

Screenshots

Additional context

I have access to all of the current models with the exception of GPT-4 32K

Project structure and easy(ier) to edit files

Feature Request

Clean up of the project structure

Is your feature request related to a problem? Please describe.

Not all files are available within the IDE (VS2022), which makes them harder to edit.

Describe the solution you'd like

  1. Add solution directories to contain these files.
  2. Split up Tests and Source
  3. Add Examples directory to contain example applications. Not needed as of yet
  4. Add editorconfig

cannot install the package in my vsto office addin project

Could you please help to take a look:

Install-Package : Could not install package 'OpenAI-DotNet 5.1.0'. You are trying to install this package into a project that targets '.NETFramework,Version=v4.7.2', but the package does not contain any assembly references or content files that are compatible with that framework. For more information, contact the package author.
At line:1 char:1

  • Install-Package OpenAI-DotNet
  •   + CategoryInfo          : NotSpecified: (:) [Install-Package], Exception
      + FullyQualifiedErrorId : NuGetCmdletUnhandledException,NuGet.PackageManagement.PowerShellCmdlets.InstallPackageCommand
    
    

Time Elapsed: 00:00:03.0833628

Unable to use the OpenAI-DotNet client against the OpenAI-DotNet-Proxy for an Azure Open AI deployment

Bug Report

Overview

When using the Proxy set up against an Azure Open AI instance, the example configuration for a client call against the proxy in the README doesn't appear to work when explicitly setting the domain. The construction assumes an api version endpoint of v1 (and has a check to prevent an empty v1, presumably for the OpenAI endpoints). I went back to see if the version before that change would work (explicitly setting the api version to ""), but it appends an extra / to the BaseRequest/BaseRequestUrlFormat, so the proxy returns a 404.

Installed versions:
OpenAi-DotNet-Proxy==6.4.1
OpenAI-DotNet==6.4.1 and 7.0.1

To Reproduce

Steps to reproduce the behavior:

  1. Load up a proxy against an Azure Open AI instance
  2. Write a small console program using the example (I just used the example client call with CreateCompletionAsync).
  3. See 404

Expected behavior

I expect the proxy to be able to forward a request generated by the client, but because of the way the client is modifying the BaseRequest/BaseRequestUrlFormat, I don't think the OpenAI-DotNet client can be used. I confirmed this by loading up Postman and issuing the request that should be forwarded, and the proxy did work.

Chat streaming in 6.8.6

In v6.8.6, chat streaming content is trimmed from the end.
It seems that the very last piece of text produced by the OpenAI API is lost.

Use IHttpClientFactory or pooled connections

Feature Request

Problem

HttpClient has known issues with socket exhaustion and dns change detection. These issues (as well as solutions) are described in this article from Microsoft https://learn.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/use-httpclientfactory-to-implement-resilient-http-requests

Solution

Update the code to use the IHttpClientFactory to create short lived HttpClients that are created/destroyed when making requests. Or use a single HttpClient instance that has a PooledConnectionLifetime set to a timespan other than Timeout.Inifinite. Since this project is already using a singleton HttpClient instance it looks like the best (easiest) solution is the latter, in which the PooledConnectionLifetime property is set.
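The second option is a small change in standard .NET; a sketch using the BCL's `SocketsHttpHandler`:

```csharp
// Long-lived singleton HttpClient whose handler recycles pooled
// connections periodically, avoiding both socket exhaustion and
// stale DNS entries.
var handler = new SocketsHttpHandler
{
    PooledConnectionLifetime = TimeSpan.FromMinutes(2)
};
var httpClient = new HttpClient(handler);
```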

Usage is always null when done streaming chat

Bug Report

Overview

Usage is always null when streaming, even if the stream is complete.

To Reproduce

Attempt a stream against the chat models via StreamCompletionEnumerableAsync.

Expected behavior

At the end, I would expect the last result to contain Usage, but it doesn't (at least on my PC).

Screenshots


Additional context

The non-streaming way does return non-null Usage.

Response data is not properly being logged when unsuccessful.

Feature Request

Is your feature request related to a problem? Please describe.

Please provide more information from Response when IsSuccessStatusCode = false

Describe the solution you'd like

When a CompletionRequest's Response.IsSuccessStatusCode = false, much of the useful information being returned from the server is lost. For example, a "BadRequest" status code doesn't explain why the request was bad, which the server's response often explains in more detail.

Proposed solution:

Include the server's Response.Content in the message being bubbled up in the exception. Consider including the following in the message included in the HttpRequestException message:
Content: {await response.Content.ReadAsStringAsync()}

More context:

Using OpenAI, CompletionRequest was failing for me with BadRequest errors when my prompt grew past about 384 characters in length. After tweaking the code to surface the Response.Content, I found the server was telling me the total tokens (prompt+response) were going over the max. The response also included the solution to the problem, but that critical information was not being bubbled up.
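The proposed change, sketched against the standard HttpResponseMessage API (the exact validation method inside the library may differ):

```csharp
if (!response.IsSuccessStatusCode)
{
    // Surface the server's explanation instead of discarding it.
    var content = await response.Content.ReadAsStringAsync();
    throw new HttpRequestException(
        $"{(int)response.StatusCode} {response.ReasonPhrase}\nContent: {content}");
}
```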

Whisper

Feature Request

Is your feature request related to a problem? Please describe.

Support OpenAI Whisper

Describe the solution you'd like

OpenAI has released Whisper and I would like this library to support it

Additional context

I am willing to submit a PR

Add Dashboard/Usage API

Feature Request

Is your feature request related to a problem? Please describe.

The OpenAI Dashboard uses the same endpoints as prompting, and they can easily be used to display all information about current usage and billing. This API appears to be undocumented. E.g.:

https://api.openai.com/dashboard/billing/usage?end_date=2023-05-01&start_date=2023-04-01


/dashboard endpoints require different authentication
/v1 endpoints, though undocumented, work with API keys

Describe the solution you'd like

Information about billing and other dashboard information could be fetched using the same package

Describe alternatives you've considered

PR or writing extending the OpenAI-DotNet functionality to reuse auth keys

Additional context

Other endpoints that could prove useful

https://api.openai.com/v1/usage?date=2023-04-07
https://api.openai.com/v1/organizations/org-XXX/users

https://api.openai.com/dashboard/billing/usage?end_date=2023-05-01&start_date=2023-04-01
https://api.openai.com/dashboard/billing/credit_grants
https://api.openai.com/dashboard/billing/subscription
https://api.openai.com/dashboard/organizations/org-XXX/features
https://api.openai.com/dashboard/user/api_keys

Renovations Needed

Thanks for keeping this library going. I ran into a few issues:

  • The Engine endpoint that you use for completions appears to be marked obsolete in the docs. There's a new endpoint for that now, with similar semantics.
  • I'm not sure you are using the newest version of davinci, or that it's possible to use it the way this is set up. I may be missing something.
  • When doing a ~2000 token request that contained some text that had to be escaped by your API before the JSON could be transmitted, I continuously received responses very different from what I got through playground. Something is wrong there.

Not sure if that's the engine, the endpoint, or a quirk of escaping special characters. I spent hours trying to get this to work and then discovered this repo: https://github.com/betalgo/openai which uses newer endpoints and with that one, the problems went away. Your syntax is a little more to my liking and your API implementation is more robust. But I've switched over for now. Just wanted to say thanks for the hard work and to share my experience.

Best model for advance translation?

Hi,

I'm looking for the best way to use GPT for text translation. Can you please advise me on how, or what endpoint to use for that? I would also like to pass instructions for context and a glossary.

Thank you for the help

Enable Github Discussions

Feature Request

Is your feature request related to a problem? Please describe.

Allow users to discuss issues without raising a Github "Issue"

Describe the solution you'd like

Enable Github Discussions

Describe alternatives you've considered

N/A

Question regarding conversations

Hi there,

For starters - thanks for your great work! Adding support for a proxy is going above and beyond, it's really appreciated :)

I have a question though, I can't seem to find how to hold actual conversations. In the previous forked library, you create a conversation, and then add user input etc:

var chat = api.Chat.CreateConversation();
chat.AppendUserInput("Is this an animal? Cat");

In your version, I created a ChatRequest with a List<ChatPrompt> bound to it. I then created an extension method to easily add ChatPrompt objects to that list, with either system, assistant or user as its role.

At first that works fine. I add a system message explaining to the assistant what his role is, and it introduces itself.

However, after adding a new ChatPrompt with a user-role question and then running ChatEndpoint.GetCompletionAsync(_chatRequest) again, the conversation just starts over and the assistant introduces itself again.

How can I keep the conversation going, without having to create a new ChatRequest every time? Or is that the way to go?
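Reusing the same ChatRequest won't carry state; the API is stateless, so the usual pattern is to keep one prompt list alive, append the assistant's reply and each new user turn, and build a fresh ChatRequest from the full history on every call. A sketch using the ChatPrompt shape mentioned above (property names are assumed):

```csharp
// One long-lived history for the whole conversation.
var chatPrompts = new List<ChatPrompt>
{
    new ChatPrompt("system", "You classify things as animals.")
};

// One turn: add the user message and send the full history.
chatPrompts.Add(new ChatPrompt("user", "Is this an animal? Cat"));
var result = await api.ChatEndpoint.GetCompletionAsync(new ChatRequest(chatPrompts));

// Append the reply so the next request carries the full context.
chatPrompts.Add(new ChatPrompt("assistant", result.FirstChoice.ToString()));
```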

.NET version?

Could not install package 'OpenAI-DotNet 3.0.1'. You are trying to install this package into a project that targets '.NETFramework,Version=v4.8', but the package does not contain any assembly references or content files that are compatible with that framework. For more information, contact the package author.

I'm getting the above error. Does this mean it does not work on >= 4.7.2?
