Openai.ex

Unofficial community-maintained wrapper for the OpenAI REST APIs. See https://platform.openai.com/docs/api-reference/introduction for further info on the REST endpoints.

⚠️⚠️⚠️ Disclaimer: please be aware that issues and pull requests (PRs) may be addressed with some delay, since my current workload requires me to prioritize other projects. As a consequence, the library may not always reflect the latest API specifications. You can explore alternative projects here:

Thank you for your understanding and support, and thanks to everyone who has contributed to the library so far!

Installation

Add :openai as a dependency in your mix.exs file.

def deps do
  [
    {:openai, "~> 0.6.2"}
  ]
end

Configuration

You can configure openai in your mix config file (by default $project_root/config/config.exs). If you're using Phoenix, add the configuration to your config/dev.exs|test.exs|prod.exs files. An example config:

import Config

config :openai,
  # find it at https://platform.openai.com/account/api-keys
  api_key: "your-api-key",
  # find it at https://platform.openai.com/account/org-settings under "Organization ID"
  organization_key: "your-organization-key",
  # optional, use when required by an OpenAI API beta, e.g.:
  beta: "assistants=v1",
  # optional, passed to [HTTPoison.Request](https://hexdocs.pm/httpoison/HTTPoison.Request.html) options
  http_options: [recv_timeout: 30_000],
  # optional, useful if you want to do local integration tests using Bypass or similar
  # (https://github.com/PSPDFKit-labs/bypass), do not use it for production code,
  # but only in your test config!
  api_url: "http://localhost/"

Note: you can load OS environment variables in the configuration file. For example, if you set an environment variable named OPENAI_API_KEY for the API key, you can read it in the config with System.get_env("OPENAI_API_KEY").

⚠️ config.exs is evaluated at compile time, so System.get_env/1 calls there are executed during the build. If you want to read environment variables at runtime, use runtime.exs instead of config.exs in your application (elixir doc ref).
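For example, a minimal config/runtime.exs sketch (assuming the OPENAI_API_KEY and OPENAI_ORGANIZATION_KEY environment variables are set where the app runs):

```elixir
# config/runtime.exs -- evaluated when the application boots, not at compile time
import Config

config :openai,
  api_key: System.get_env("OPENAI_API_KEY"),
  organization_key: System.get_env("OPENAI_ORGANIZATION_KEY"),
  http_options: [recv_timeout: 30_000]
```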

Configuration override

The client library configuration can be overridden at runtime by passing a %OpenAI.Config{} struct as the last argument of the function you need to use. For instance, if you need a different api_key, organization_key or http_options you can simply do:

# Returns a config struct with "test-api-key" as api_key; all other fields
# default to the values taken from config.exs, so you don't need to set them manually.
config_override = %OpenAI.Config{api_key: "test-api-key"}

# chat_completion with overridden config
OpenAI.chat_completion([
  model: "gpt-3.5-turbo",
  messages: [
        %{role: "system", content: "You are a helpful assistant."},
        %{role: "user", content: "Who won the world series in 2020?"},
        %{role: "assistant", content: "The Los Angeles Dodgers won the World Series in 2020."},
        %{role: "user", content: "Where was it played?"}
    ]
  ],
  config_override # <--- pass the overridden configuration as the last argument of the function
)


# chat_completion with standard config
OpenAI.chat_completion(
  model: "gpt-3.5-turbo",
  messages: [
      %{role: "system", content: "You are a helpful assistant."},
      %{role: "user", content: "Who won the world series in 2020?"},
      %{role: "assistant", content: "The Los Angeles Dodgers won the World Series in 2020."},
      %{role: "user", content: "Where was it played?"}
  ]
)

You can perform a config override in all the functions. Note that the params argument must be passed explicitly as a list in square brackets when the configuration is overridden, as in the example above.
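For example, a one-off longer timeout for a single slow request can be sketched like this (the timeout value is illustrative):

```elixir
# Override only http_options for this call; api_key and the other fields
# still fall back to the values from config.exs.
slow_config = %OpenAI.Config{http_options: [recv_timeout: 120_000]}

OpenAI.completions(
  [model: "finetuned-model", prompt: "once upon a time", max_tokens: 5],
  slow_config
)
```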

Usage overview

Get your API key from https://platform.openai.com/account/api-keys

models()

Retrieve the list of available models

Example request

OpenAI.models()

Example response

{:ok, %{
  data: [%{
    "created" => 1651172505,
    "id" => "davinci-search-query",
    "object" => "model",
    "owned_by" => "openai-dev",
    "parent" => nil,
    "permission" => [
      %{
        "allow_create_engine" => false,
        "allow_fine_tuning" => false,
        "allow_logprobs" => true,
        ...
      }
    ],
    "root" => "davinci-search-query"
  },
  ....],
  object: "list"
}}

See: https://platform.openai.com/docs/api-reference/models/list

models(model_id)

Retrieve specific model info

OpenAI.models("davinci-search-query")

Example response

{:ok,
 %{
   created: 1651172505,
   id: "davinci-search-query",
   object: "model",
   owned_by: "openai-dev",
   parent: nil,
   permission: [
     %{
       "allow_create_engine" => false,
       "allow_fine_tuning" => false,
       "allow_logprobs" => true,
       "allow_sampling" => true,
       "allow_search_indices" => true,
       "allow_view" => true,
       "created" => 1669066353,
       "group" => nil,
       "id" => "modelperm-lYkiTZMmJMWm8jvkPx2duyHE",
       "is_blocking" => false,
       "object" => "model_permission",
       "organization" => "*"
     }
   ],
   root: "davinci-search-query"
 }}

See: https://platform.openai.com/docs/api-reference/models/retrieve

completions(params)

Returns one or more predicted completions for the given prompt. The function accepts the set of parameters used by the Completions OpenAI API.

Example request

  OpenAI.completions(
    model: "finetuned-model",
    prompt: "once upon a time",
    max_tokens: 5,
    temperature: 1,
    ...
  )

Example response

  {:ok, %{
    choices: [
      %{
        "finish_reason" => "length",
        "index" => 0,
        "logprobs" => nil,
        "text" => "\" thing we are given"
      }
    ],
    created: 1617147958,
    id: "...",
    model: "...",
    object: "text_completion"
    }
  }

See: https://platform.openai.com/docs/api-reference/completions/create

completions(engine_id, params) (DEPRECATED)

This API has been deprecated by OpenAI, as engines have been replaced by models. If you are using it, consider switching to completions(params) ASAP!

Example request

  OpenAI.completions(
    "davinci", # engine_id
    prompt: "once upon a time",
    max_tokens: 5,
    temperature: 1,
    ...
)

Example response

{:ok, %{
  choices: [
    %{
      "finish_reason" => "length",
      "index" => 0,
      "logprobs" => nil,
      "text" => "\" thing we are given"
    }
  ],
  created: 1617147958,
  id: "...",
  model: "...",
  object: "text_completion"
  }
}

See: https://beta.openai.com/docs/api-reference/completions/create for the complete list of parameters you can pass to the completions function

chat_completion()

Creates a completion for the given chat messages.

Example request

OpenAI.chat_completion(
  model: "gpt-3.5-turbo",
  messages: [
        %{role: "system", content: "You are a helpful assistant."},
        %{role: "user", content: "Who won the world series in 2020?"},
        %{role: "assistant", content: "The Los Angeles Dodgers won the World Series in 2020."},
        %{role: "user", content: "Where was it played?"}
    ]
)

Example response

{:ok,
     %{
       choices: [
         %{
           "finish_reason" => "stop",
           "index" => 0,
           "message" => %{
             "content" =>
               "The 2020 World Series was played at Globe Life Field in Arlington, Texas due to the COVID-19 pandemic.",
             "role" => "assistant"
           }
         }
       ],
       created: 1_677_773_799,
       id: "chatcmpl-6pftfA4NO9pOQIdxao6Z4McDlx90l",
       model: "gpt-3.5-turbo-0301",
       object: "chat.completion",
       usage: %{
         "completion_tokens" => 26,
         "prompt_tokens" => 56,
         "total_tokens" => 82
       }
     }}

See: https://platform.openai.com/docs/api-reference/chat/create for the complete list of parameters you can pass to the chat_completion function

chat_completion() with stream

Creates a completion for the chat messages as a stream. By default it streams to self(), but you can override this by passing a config override with a different stream_to value in http_options.

Example request

import Config

config :openai,
  api_key: "your-api-key",
  http_options: [recv_timeout: :infinity, async: :once],
  ...

http_options must be set as above when you want to treat the chat completion as a stream.

OpenAI.chat_completion([
    model: "gpt-3.5-turbo",
    messages: [
      %{role: "system", content: "You are a helpful assistant."},
      %{role: "user", content: "Who won the world series in 2020?"},
      %{role: "assistant", content: "The Los Angeles Dodgers won the World Series in 2020."},
      %{role: "user", content: "Where was it played?"}
    ],
    stream: true # set this param to true
  ]
)
|> Stream.each(fn res ->
  IO.inspect(res)
end)
|> Stream.run()

Example response

%{
  "choices" => [
    %{"delta" => %{"role" => "assistant"}, "finish_reason" => nil, "index" => 0}
  ],
  "created" => 1682700668,
  "id" => "chatcmpl-7ALbIuLju70hXy3jPa3o5VVlrxR6a",
  "model" => "gpt-3.5-turbo-0301",
  "object" => "chat.completion.chunk"
}
%{
  "choices" => [
    %{"delta" => %{"content" => "The"}, "finish_reason" => nil, "index" => 0}
  ],
  "created" => 1682700668,
  "id" => "chatcmpl-7ALbIuLju70hXy3jPa3o5VVlrxR6a",
  "model" => "gpt-3.5-turbo-0301",
  "object" => "chat.completion.chunk"
}
%{
  "choices" => [
    %{"delta" => %{"content" => " World"}, "finish_reason" => nil, "index" => 0}
  ],
  "created" => 1682700668,
  "id" => "chatcmpl-7ALbIuLju70hXy3jPa3o5VVlrxR6a",
  "model" => "gpt-3.5-turbo-0301",
  "object" => "chat.completion.chunk"
}
%{
  "choices" => [
    %{
      "delta" => %{"content" => " Series"},
      "finish_reason" => nil,
      "index" => 0
    }
  ],
  "created" => 1682700668,
  "id" => "chatcmpl-7ALbIuLju70hXy3jPa3o5VVlrxR6a",
  "model" => "gpt-3.5-turbo-0301",
  "object" => "chat.completion.chunk"
}
%{
  "choices" => [
    %{"delta" => %{"content" => " in"}, "finish_reason" => nil, "index" => 0}
  ],
  "created" => 1682700668,
  "id" => "chatcmpl-7ALbIuLju70hXy3jPa3o5VVlrxR6a",
  "model" => "gpt-3.5-turbo-0301",
  "object" => "chat.completion.chunk"
}
...
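A sketch of collecting the streamed content deltas into a single string, assuming the streaming http_options shown above (the chunks are maps shaped like the examples in this section; role-only and final chunks carry no "content" key):

```elixir
answer =
  OpenAI.chat_completion(
    model: "gpt-3.5-turbo",
    messages: [%{role: "user", content: "Where was the 2020 World Series played?"}],
    stream: true
  )
  |> Stream.map(fn chunk ->
    # extract the partial text from each chunk; nil when the delta has no content
    chunk["choices"] |> List.first() |> get_in(["delta", "content"])
  end)
  |> Stream.reject(&is_nil/1)
  |> Enum.join()
```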

edits()

Creates a new edit for the provided input, instruction, and parameters

Example request

OpenAI.edits(
  model: "text-davinci-edit-001",
  input: "What day of the wek is it?",
  instruction: "Fix the spelling mistakes"
)

Example response

{:ok,
  %{
   choices: [%{"index" => 0, "text" => "What day of the week is it?\n"}],
   created: 1675443483,
   object: "edit",
   usage: %{
     "completion_tokens" => 28,
     "prompt_tokens" => 25,
     "total_tokens" => 53
  }
}}

See: https://platform.openai.com/docs/api-reference/edits/create

images_generations(params)

Generates an image based on the given prompt. Image functions can take some time to execute and the API may return a timeout error; if needed, you can pass an optional configuration struct with HTTPoison http_options as the second argument of the function to increase the timeout.

Example request

OpenAI.images_generations(
    [prompt: "A developer writing a test", size: "256x256"],
    %OpenAI.Config{http_options: [recv_timeout: 10 * 60 * 1000]} # optional!
 )

Example response

{:ok,
 %{
   created: 1670341737,
   data: [
     %{
       "url" => "..." # returned image url
     }
   ]
 }}

Note: this API signature changed in v0.3.0 to be compliant with the conventions of the other APIs; the alias OpenAI.image_generations(params, request_options) is still available for backward compatibility. If you are using it, consider switching to OpenAI.images_generations(params, config) ASAP.

Note 2: the official way of passing http_options changed in v0.5.0 to be compliant with the conventions of the other APIs; the old alias OpenAI.images_generations(params, request_options) is still available for backward compatibility. If you are using it, consider switching to OpenAI.images_generations(params, config).

See: https://platform.openai.com/docs/api-reference/images/create

images_edits(file_path, params)

Edits an existing image based on the given prompt. Image functions can take some time to execute and the API may return a timeout error; if needed, you can pass an optional configuration struct with HTTPoison http_options as the last argument of the function to increase the timeout.

Example request

OpenAI.images_edits(
     "/home/developer/myImg.png",
     [prompt: "A developer writing a test", size: "256x256"],
    %OpenAI.Config{http_options: [recv_timeout: 10 * 60 * 1000]} # optional!
 )

Example response

{:ok,
 %{
   created: 1670341737,
   data: [
     %{
       "url" => "..." # returned image url
     }
   ]
 }}

Note: the official way of passing http_options changed in v0.5.0 to be compliant with the conventions of the other APIs; the old alias OpenAI.images_edits(file_path, params, request_options) is still available for backward compatibility. If you are using it, consider switching to OpenAI.images_edits(file_path, params, config).

See: https://platform.openai.com/docs/api-reference/images/create-edit

images_variations(file_path, params)

Creates a variation of a given image. Image functions can take some time to execute and the API may return a timeout error; if needed, you can pass an optional configuration struct with HTTPoison http_options as the last argument of the function to increase the timeout.

Example request

OpenAI.images_variations(
    "/home/developer/myImg.png",
    [n: "5"],
    %OpenAI.Config{http_options: [recv_timeout: 10 * 60 * 1000]} # optional!
)

Example response

{:ok,
 %{
   created: 1670341737,
   data: [
     %{
       "url" => "..." # returned image url
     }
   ]
 }}

Note: the official way of passing http_options changed in v0.5.0 to be compliant with the conventions of the other APIs; the old alias OpenAI.images_variations(file_path, params, request_options) is still available for backward compatibility. If you are using it, consider switching to OpenAI.images_variations(file_path, params, config).

See: https://platform.openai.com/docs/api-reference/images/create-variation

embeddings(params)

Gets a vector representation of a given input, which can be consumed by machine learning models and algorithms.

Example request

OpenAI.embeddings(
    model: "text-embedding-ada-002",
    input: "The food was delicious and the waiter..."
  )

Example response

{:ok,
  %{
   data: [
     %{
       "embedding" => [0.0022523515000000003, -0.009276069000000001,
        0.015758524000000003, -0.007790373999999999, -0.004714223999999999,
        0.014806155000000001, -0.009803046499999999, -0.038323310000000006,
        -0.006844355, -0.028672641, 0.025345700000000002, 0.018145794000000003,
        -0.0035904291999999997, -0.025498080000000003, 5.142790000000001e-4,
        -0.016317246, 0.028444072, 0.0053713582, 0.009631619999999999,
        -0.016469626, -0.015390275, 0.004301531, 0.006984035499999999,
        -0.007079272499999999, -0.003926933, 0.018602932000000003, 0.008666554,
        -0.022717162999999995, 0.011460166999999997, 0.023860006,
        0.015568050999999998, -0.003587254600000001, -0.034843990000000005,
        -0.0041555012999999995, -0.026107594000000005, -0.02151083,
        -0.0057618289999999996, 0.011714132499999998, 0.008355445999999999,
        0.004098358999999999, 0.019199749999999998, -0.014336321, 0.008952264,
        0.0063395994, -0.04576447999999999, ...],
       "index" => 0,
       "object" => "embedding"
     }
   ],
   model: "text-embedding-ada-002-v2",
   object: "list",
   usage: %{"prompt_tokens" => 8, "total_tokens" => 8}
  }}

See: https://platform.openai.com/docs/api-reference/embeddings/create
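Embedding vectors are typically compared with cosine similarity. A minimal sketch, assuming the endpoint is given a list input (which the underlying API accepts) so two vectors come back in one call:

```elixir
defmodule Similarity do
  # cosine similarity between two equal-length embedding vectors
  def cosine(a, b) do
    dot = Enum.zip(a, b) |> Enum.map(fn {x, y} -> x * y end) |> Enum.sum()
    norm = fn v -> :math.sqrt(Enum.sum(Enum.map(v, &(&1 * &1)))) end
    dot / (norm.(a) * norm.(b))
  end
end

{:ok, %{data: [%{"embedding" => a}, %{"embedding" => b}]}} =
  OpenAI.embeddings(
    model: "text-embedding-ada-002",
    input: ["The food was delicious", "The meal tasted great"]
  )

Similarity.cosine(a, b)
```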

audio_speech(params)

Generates audio from the input text.

Example request

OpenAI.audio_speech(
  model: "tts-1",
  input: "You know that Voight-Kampf test of yours. Did you ever take that test yourself?",
  voice: "alloy"
)

Example response

  {:ok, <<255, 255, ...>>}

See: https://platform.openai.com/docs/api-reference/audio/create to get info on the params accepted by the api
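Since the call returns the raw audio binary, you can write it straight to disk; a sketch (the file path is illustrative, and tts-1 returns mp3 by default):

```elixir
{:ok, audio_binary} =
  OpenAI.audio_speech(
    model: "tts-1",
    input: "You know that Voight-Kampf test of yours. Did you ever take that test yourself?",
    voice: "alloy"
  )

# persist the generated speech to a local file
File.write!("./speech.mp3", audio_binary)
```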

audio_transcription(file_path, params)

Transcribes audio into the input language.

Example request

OpenAI.audio_transcription(
  "./path_to_file/blade_runner.mp3", # file path
  model: "whisper-1"
)

Example response

 {:ok,
  %{
   text: "I've seen things you people wouldn't believe.."
  }}

See: https://platform.openai.com/docs/api-reference/audio/create to get info on the params accepted by the api

audio_translation(file_path, params)

Translates audio into English.

Example request

OpenAI.audio_translation(
  "./path_to_file/werner_herzog_interview.mp3", # file path
  model: "whisper-1"
)

Example response

{:ok,
  %{
    text:  "I thought if I walked, I would be saved. It was almost like a pilgrimage. I will definitely continue to walk long distances. It is a very unique form of life and existence that we have lost almost entirely from our normal life."
  }
}

See: https://platform.openai.com/docs/api-reference/audio/create to get info on the params accepted by the api

files()

Returns a list of files that belong to the user's organization.

Example request

OpenAI.files()

Example response

{:ok,
 %{
 data: [
   %{
     "bytes" => 123,
     "created_at" => 213,
     "filename" => "file.jsonl",
     "id" => "file-123321",
     "object" => "file",
     "purpose" => "fine-tune",
     "status" => "processed",
     "status_details" => nil
   }
 ],
 object: "list"
 }
}

See: https://platform.openai.com/docs/api-reference/files

files(file_id)

Returns a file that belongs to the user's organization, given a file id.

Example request

OpenAI.files("file-123321")

Example response

{:ok,
%{
  bytes: 923,
  created_at: 1675370979,
  filename: "file.jsonl",
  id: "file-123321",
  object: "file",
  purpose: "fine-tune",
  status: "processed",
  status_details: nil
}
}

See: https://platform.openai.com/docs/api-reference/files/retrieve

files_upload(file_path, params)

Upload a file that contains document(s) to be used across various endpoints/features. Currently, the size of all the files uploaded by one organization can be up to 1 GB. Please contact OpenAI if you need to increase the storage limit.

Example request

OpenAI.files_upload("./file.jsonl", purpose: "fine-tune")

Example response

{:ok,
  %{
    bytes: 923,
    created_at: 1675373519,
    filename: "file.jsonl",
    id: "file-123",
    object: "file",
    purpose: "fine-tune",
    status: "uploaded",
    status_details: nil
  }
}

See: https://platform.openai.com/docs/api-reference/files/upload

files_delete(file_id)

Deletes a file.

Example request

OpenAI.files_delete("file-123")

Example response

{:ok, %{deleted: true, id: "file-123", object: "file"}}

See: https://platform.openai.com/docs/api-reference/files/delete

finetunes()

List your organization's fine-tuning jobs.

Example request

OpenAI.finetunes()

Example response

{:ok,
  %{
    object: "list",
    data: [%{
      "id" => "t-AF1WoRqd3aJAHsqc9NY7iL8F",
      "object" => "fine-tune",
      "model" => "curie",
      "created_at" => 1614807352,
      "fine_tuned_model" => nil,
      "hyperparams" => %{...},
      "organization_id" => "org-...",
      "result_files" => [],
      "status" => "pending",
      "validation_files" => [],
      "training_files" => [%{...}],
      "updated_at" => 1614807352
    }],
  }
}

See: https://platform.openai.com/docs/api-reference/fine-tunes/list

finetunes(finetune_id)

Gets info about a fine-tune job.

Example request

OpenAI.finetunes("t-AF1WoRqd3aJAHsqc9NY7iL8F")

Example response

{:ok,
  %{
    object: "list",
    data: [%{
      "id" => "t-AF1WoRqd3aJAHsqc9NY7iL8F",
      "object" => "fine-tune",
      "model" => "curie",
      "created_at" => 1614807352,
      "fine_tuned_model" => nil,
      "hyperparams" => %{...},
      "organization_id" => "org-...",
      "result_files" => [],
      "status" => "pending",
      "validation_files" => [],
      "training_files" => [%{...}],
      "updated_at" => 1614807352
    }],
  }
}

See: https://platform.openai.com/docs/api-reference/fine-tunes/retrieve

finetunes_create(params)

Creates a job that fine-tunes a specified model from a given dataset.

Example request

OpenAI.finetunes_create(
  training_file: "file-123213231",
  model: "curie"
)

Example response

{:ok,
 %{
   created_at: 1675527767,
   events: [
     %{
       "created_at" => 1675527767,
       "level" => "info",
       "message" => "Created fine-tune: ft-IaBYfSSAK47UUCbebY5tBIEj",
       "object" => "fine-tune-event"
     }
   ],
   fine_tuned_model: nil,
   hyperparams: %{
     "batch_size" => nil,
     "learning_rate_multiplier" => nil,
     "n_epochs" => 4,
     "prompt_loss_weight" => 0.01
   },
   id: "ft-IaBYfSSAK47UUCbebY5tBIEj",
   model: "curie",
   object: "fine-tune",
   organization_id: "org-1iPTOIak4b5fpuIB697AYMmO",
   result_files: [],
   status: "pending",
   training_files: [
     %{
       "bytes" => 923,
       "created_at" => 1675373519,
       "filename" => "file-12321323.jsonl",
       "id" => "file-12321323",
       "object" => "file",
       "purpose" => "fine-tune",
       "status" => "processed",
       "status_details" => nil
     }
   ],
   updated_at: 1675527767,
   validation_files: []
 }}

See: https://platform.openai.com/docs/api-reference/fine-tunes/create

finetunes_list_events(finetune_id)

Get fine-grained status updates for a fine-tune job.

Example request

OpenAI.finetunes_list_events("ft-AF1WoRqd3aJAHsqc9NY7iL8F")

Example response

{:ok,
  %{
   data: [
     %{
       "created_at" => 1675376995,
       "level" => "info",
       "message" => "Created fine-tune: ft-123",
       "object" => "fine-tune-event"
     },
     %{
       "created_at" => 1675377104,
       "level" => "info",
       "message" => "Fine-tune costs $0.00",
       "object" => "fine-tune-event"
     },
     %{
       "created_at" => 1675377105,
       "level" => "info",
       "message" => "Fine-tune enqueued. Queue number: 18",
       "object" => "fine-tune-event"
     },
    ...,
     ]
    }
  }

See: https://platform.openai.com/docs/api-reference/fine-tunes/events

finetunes_cancel(finetune_id)

Immediately cancel a fine-tune job.

Example request

OpenAI.finetunes_cancel("ft-AF1WoRqd3aJAHsqc9NY7iL8F")

Example response

  {:ok,
  %{
   created_at: 1675527767,
   events: [
     ...
     %{
       "created_at" => 1675528080,
       "level" => "info",
       "message" => "Fine-tune cancelled",
       "object" => "fine-tune-event"
     }
   ],
   fine_tuned_model: nil,
   hyperparams: %{
     "batch_size" => 1,
     "learning_rate_multiplier" => 0.1,
     "n_epochs" => 4,
     "prompt_loss_weight" => 0.01
   },
   id: "ft-IaBYfSSAK47UUCbebY5tBIEj",
   model: "curie",
   object: "fine-tune",
   organization_id: "org-1iPTOIak4b5fpuIB697AYMmO",
   result_files: [],
   status: "cancelled",
   training_files: [
     %{
       "bytes" => 923,
       "created_at" => 1675373519,
       "filename" => "file123.jsonl",
       "id" => "file-123",
       "object" => "file",
       "purpose" => "fine-tune",
       "status" => "processed",
       "status_details" => nil
     }
   ],
   updated_at: 1675528080,
   validation_files: []
  }}

See: https://platform.openai.com/docs/api-reference/fine-tunes/cancel

finetunes_delete_model(model_id)

Deletes a fine-tuned model.

Example request

OpenAI.finetunes_delete_model("model-id")

Example response

{:ok,
  %{
   id: "model-id",
   object: "model",
   deleted: true
  }
}

See: https://platform.openai.com/docs/api-reference/fine-tunes/delete-model

moderations(params)

Classifies if text violates OpenAI's Content Policy

Example request

OpenAI.moderations(input: "I want to kill everyone!")

Example response

{:ok,
  %{
   id: "modr-6gEWXyuaU8dqiHpbAHIsdru0zuC88",
   model: "text-moderation-004",
   results: [
     %{
       "categories" => %{
         "hate" => false,
         "hate/threatening" => false,
         "self-harm" => false,
         "sexual" => false,
         "sexual/minors" => false,
         "violence" => true,
         "violence/graphic" => false
       },
       "category_scores" => %{
         "hate" => 0.05119025334715844,
         "hate/threatening" => 0.00321022979915142,
         "self-harm" => 7.337320857914165e-5,
         "sexual" => 1.1111642379546538e-6,
         "sexual/minors" => 3.588798147546868e-10,
         "violence" => 0.9190407395362855,
         "violence/graphic" => 1.2791929293598514e-7
       },
       "flagged" => true
     }
   ]
  }}

See: https://platform.openai.com/docs/api-reference/moderations/create
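A sketch of using the "flagged" field to gate user input before forwarding it to a completion endpoint (the module and function names are illustrative):

```elixir
defmodule Guard do
  # true only when the moderation endpoint does not flag the text
  def safe_input?(text) do
    case OpenAI.moderations(input: text) do
      {:ok, %{results: [%{"flagged" => flagged} | _]}} -> not flagged
      # fail closed if the moderation call itself errors
      {:error, _reason} -> false
    end
  end
end
```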

Beta APIs

The following APIs are currently in beta. To use them, be sure to set the beta parameter in your config:

config :openai,
  # optional, use when required by an OpenAI API beta, e.g.:
  beta: "assistants=v1"

assistants()

Retrieves the list of assistants.

Example request

OpenAI.assistants()

Example response

{:ok,
%{
  data: [
    %{
      "created_at" => 1699472932,
      "description" => nil,
      "file_ids" => ["file-..."],
      "id" => "asst_...",
      "instructions" => "...",
      "metadata" => %{},
      "model" => "gpt-4-1106-preview",
      "name" => "...",
      "object" => "assistant",
      "tools" => [%{"type" => "retrieval"}]
    }
  ],
  first_id: "asst_...",
  has_more: false,
  last_id: "asst_...",
  object: "list"
}}

See: https://platform.openai.com/docs/api-reference/assistants/listAssistants

assistants(params)

Retrieves the list of assistants filtered by query params.

Example request

OpenAI.assistants(after: "", limit: 10)

Example response

{:ok,
%{
  data: [
    %{
      "created_at" => 1699472932,
      "description" => nil,
      "file_ids" => ["file-..."],
      "id" => "asst_...",
      "instructions" => "...",
      "metadata" => %{},
      "model" => "gpt-4-1106-preview",
      "name" => "...",
      "object" => "assistant",
      "tools" => [%{"type" => "retrieval"}]
    },
    ...
  ],
  first_id: "asst_...",
  has_more: false,
  last_id: "asst_...",
  object: "list"
}}

See: https://platform.openai.com/docs/api-reference/assistants/listAssistants

assistants(assistant_id)

Retrieves an assistant by its id.

Example request

OpenAI.assistants("asst_...")

Example response

{:ok,
%{
  created_at: 1699472932,
  description: nil,
  file_ids: ["file-..."],
  id: "asst_...",
  instructions: "...",
  metadata: %{},
  model: "gpt-4-1106-preview",
  name: "...",
  object: "assistant",
  tools: [%{"type" => "retrieval"}]
}}

See: https://platform.openai.com/docs/api-reference/assistants/getAssistant

assistants_create(params)

Creates a new assistant.

Example request

OpenAI.assistants_create(
  model: "gpt-3.5-turbo-1106",
  name: "My assistant",
  instructions: "You are a research assistant.",
  tools: [
    %{type: "retrieval"}
  ],
  file_ids: ["file-..."]
)

Example response

{:ok,
%{
  created_at: 1699640038,
  description: nil,
  file_ids: ["file-..."],
  id: "asst_...",
  instructions: "You are a research assistant.",
  metadata: %{},
  model: "gpt-3.5-turbo-1106",
  name: "My assistant",
  object: "assistant",
  tools: [%{"type" => "retrieval"}]
}}

See: https://platform.openai.com/docs/api-reference/assistants/createAssistant

assistants_modify(assistant_id, params)

Modifies an existing assistant.

Example request

OpenAI.assistants_modify(
  "asst_...",
  model: "gpt-4-1106-preview",
  name: "My upgraded assistant"
)

Example response

{:ok,
%{
  created_at: 1699640038,
  description: nil,
  file_ids: ["file-..."],
  id: "asst_...",
  instructions: "You are a research assistant.",
  metadata: %{},
  model: "gpt-4-1106-preview",
  name: "My upgraded assistant",
  object: "assistant",
  tools: [%{"type" => "retrieval"}]
}}

See: https://platform.openai.com/docs/api-reference/assistants/modifyAssistant

assistants_delete(assistant_id)

Deletes an assistant.

Example request

OpenAI.assistants_delete("asst_...")

Example response

{:ok,
%{
  deleted: true,
  id: "asst_...",
  object: "assistant.deleted"
}}

See: https://platform.openai.com/docs/api-reference/assistants/deleteAssistant

assistant_files(assistant_id)

Retrieves the list of files associated with a particular assistant.

Example request

OpenAI.assistant_files("asst_...")

Example response

{:ok,
%{
  data: [
    %{
      "assistant_id" => "asst_...",
      "created_at" => 1699472933,
      "id" => "file-...",
      "object" => "assistant.file"
    }
  ],
  first_id: "file-...",
  has_more: false,
  last_id: "file-...",
  object: "list"
}}

See: https://platform.openai.com/docs/api-reference/assistants/listAssistantFiles

assistant_files(assistant_id, params)

Retrieves the list of files associated with a particular assistant, filtered by query params.

Example request

OpenAI.assistant_files("asst_...", order: "desc")

Example response

{:ok,
%{
  data: [
    %{
      "assistant_id" => "asst_...",
      "created_at" => 1699472933,
      "id" => "file-...",
      "object" => "assistant.file"
    }
  ],
  first_id: "file-...",
  has_more: false,
  last_id: "file-...",
  object: "list"
}}

See: https://platform.openai.com/docs/api-reference/assistants/listAssistantFiles

assistant_file(assistant_id, file_id)

Retrieves an assistant file by its id

Example request

OpenAI.assistant_file("asst_...", "file_...")

Example response

{:ok,
%{
  assistant_id: "asst_...",
  created_at: 1699472933,
  id: "file-...",
  object: "assistant.file"
}}

See: https://platform.openai.com/docs/api-reference/assistants/getAssistantFile

assistant_file_create(assistant_id, params)

Attaches a previously uploaded file to the assistant.

Example request

OpenAI.assistant_file_create("asst_...", file_id: "file-...")

Example response

{:ok,
%{
  assistant_id: "asst_...",
  created_at: 1699472933,
  id: "file-...",
  object: "assistant.file"
}}

See: https://platform.openai.com/docs/api-reference/assistants/createAssistantFile

assistant_file_delete(assistant_id, file_id)

Detaches a file from the assistant. The file itself is not automatically deleted.

Example request

OpenAI.assistant_file_delete("asst_...", "file-...")

Example response

{:ok,
%{
  deleted: true,
  id: "file-...",
  object: "assistant.file.deleted"
}}

See: https://platform.openai.com/docs/api-reference/assistants/deleteAssistantFile

threads()

Retrieves the list of threads. NOTE: At the time of this writing this functionality remains undocumented by OpenAI.

Example request

OpenAI.threads()

Example response

{:ok,
%{
  data: [
    %{
      "created_at" => 1699705727,
      "id" => "thread_...",
      "metadata" => %{"key_1" => "value 1", "key_2" => "value 2"},
      "object" => "thread"
    },
    ...
  ],
  first_id: "thread_...",
  has_more: false,
  last_id: "thread_...",
  object: "list"
}}

threads(params)

Retrieves the list of threads by query params. NOTE: At the time of this writing this functionality remains undocumented by OpenAI.

Example request

OpenAI.threads(limit: 2)

Example response

{:ok,
%{
  data: [
    %{
      "created_at" => 1699705727,
      "id" => "thread_...",
      "metadata" => %{"key_1" => "value 1", "key_2" => "value 2"},
      "object" => "thread"
    },
    ...
  ],
  first_id: "thread_...",
  has_more: false,
  last_id: "thread_...",
  object: "list"
}}

threads_create(params)

Creates a new thread with some messages and metadata.

Example request

messages = [
  %{
    role: "user",
    content: "Hello, what is AI?",
    file_ids: ["file-..."]
  },
  %{
    role: "user",
    content: "How does AI work? Explain it in simple terms."
  },
]
metadata = %{
  key_1: "value 1",
  key_2: "value 2"
}
OpenAI.threads_create(messages: messages, metadata: metadata)

Example response

{:ok,
%{
  created_at: 1699703890,
  id: "thread_...",
  metadata: %{"key_1" => "value 1", "key_2" => "value 2"},
  object: "thread"
}}

See: https://platform.openai.com/docs/api-reference/threads/createThread

threads_create_and_run(params)

Creates a new thread and runs it.

Example request

messages = [
  %{
    role: "user",
    content: "Hello, what is AI?",
    file_ids: ["file-..."]
  },
  %{
    role: "user",
    content: "How does AI work? Explain it in simple terms."
  },
]

thread_metadata = %{
  key_1: "value 1",
  key_2: "value 2"
}

thread = %{
  messages: messages,
  metadata: thread_metadata
}

run_metadata = %{
  key_3: "value 3"
}

params = [
  assistant_id: "asst_...",
  thread: thread,
  model: "gpt-4-1106-preview",
  instructions: "You are an AI learning assistant.",
  tools: [%{
    "type" => "retrieval"
  }],
  metadata: run_metadata
]

OpenAI.threads_create_and_run(params)

Example response

{:ok,
%{
  assistant_id: "asst_...",
  cancelled_at: nil,
  completed_at: nil,
  created_at: 1699897907,
  expires_at: 1699898507,
  failed_at: nil,
  file_ids: ["file-..."],
  id: "run_...",
  instructions: "You are an AI learning assistant.",
  last_error: nil,
  metadata: %{"key_3" => "value 3"},
  model: "gpt-4-1106-preview",
  object: "thread.run",
  started_at: nil,
  status: "queued",
  thread_id: "thread_...",
  tools: [%{"type" => "retrieval"}]
}}

See: https://platform.openai.com/docs/api-reference/runs/createThreadAndRun

threads_modify(thread_id, params)

Modifies an existing thread.

Example request

metadata = %{
  key_3: "value 3"
}

OpenAI.threads_modify("thread_...", metadata: metadata)

Example response

{:ok,
%{
  created_at: 1699704406,
  id: "thread_...",
  metadata: %{"key_1" => "value 1", "key_2" => "value 2", "key_3" => "value 3"},
  object: "thread"
}}

See: https://platform.openai.com/docs/api-reference/threads/modifyThread

threads_delete(thread_id)

Deletes an existing thread.

Example request

OpenAI.threads_delete("thread_...")

Example response

{:ok,
%{
  deleted: true,
  id: "thread_...",
  object: "thread.deleted"
}}

See: https://platform.openai.com/docs/api-reference/threads/deleteThread

thread_messages(thread_id)

Retrieves the list of messages associated with a particular thread.

Example request

OpenAI.thread_messages("thread_...")

Example response

{:ok,
%{
  data: [
    %{
      "assistant_id" => nil,
      "content" => [
        %{
          "text" => %{
            "annotations" => [],
            "value" => "How does AI work? Explain it in simple terms."
          },
          "type" => "text"
        }
      ],
      "created_at" => 1699705727,
      "file_ids" => [],
      "id" => "msg_...",
      "metadata" => %{},
      "object" => "thread.message",
      "role" => "user",
      "run_id" => nil,
      "thread_id" => "thread_..."
    },
    ...
  ],
  first_id: "msg_...",
  has_more: false,
  last_id: "msg_...",
  object: "list"
}}

See: https://platform.openai.com/docs/api-reference/messages/listMessages

thread_messages(thread_id, params)

Retrieves the list of messages associated with a particular thread, filtered by query params.

Example request

OpenAI.thread_messages("thread_...", after: "msg_...")

Example response

{:ok,
%{
  data: [
    %{
      "assistant_id" => nil,
      "content" => [
        %{
          "text" => %{
            "annotations" => [],
            "value" => "How does AI work? Explain it in simple terms."
          },
          "type" => "text"
        }
      ],
      "created_at" => 1699705727,
      "file_ids" => [],
      "id" => "msg_...",
      "metadata" => %{},
      "object" => "thread.message",
      "role" => "user",
      "run_id" => nil,
      "thread_id" => "thread_..."
    },
    ...
  ],
  first_id: "msg_...",
  has_more: false,
  last_id: "msg_...",
  object: "list"
}}

See: https://platform.openai.com/docs/api-reference/messages/listMessages

thread_message(thread_id, message_id)

Retrieves a thread message by its id.

Example request

OpenAI.thread_message("thread_...", "msg_...")

Example response

 {:ok,
  %{
    assistant_id: nil,
    content: [
      %{
        "text" => %{"annotations" => [], "value" => "Hello, what is AI?"},
        "type" => "text"
      }
    ],
    created_at: 1699705727,
    file_ids: ["file-..."],
    id: "msg_...",
    metadata: %{},
    object: "thread.message",
    role: "user",
    run_id: nil,
    thread_id: "thread_..."
}}

See: https://platform.openai.com/docs/api-reference/messages/getMessage

thread_message_create(thread_id, params)

Creates a message within a thread.

Example request

params = [
  role: "user",
  content: "Hello, what is AI?",
  file_ids: ["file-9Riyo515uf9KVfwdSrIQiqtC"],
  metadata: %{
    key_1: "value 1",
    key_2: "value 2"
  }
]
OpenAI.thread_message_create("thread_...", params)

Example response

{:ok,
%{
  assistant_id: nil,
  content: [
    %{
      "text" => %{"annotations" => [], "value" => "Hello, what is AI?"},
      "type" => "text"
    }
  ],
  created_at: 1699706818,
  file_ids: ["file-..."],
  id: "msg_...",
  metadata: %{"key_1" => "value 1", "key_2" => "value 2"},
  object: "thread.message",
  role: "user",
  run_id: nil,
  thread_id: "thread_..."
}}

See: https://platform.openai.com/docs/api-reference/messages/createMessage

thread_message_modify(thread_id, message_id, params)

Modifies an existing message within a thread.

Example request

params = [
  metadata: %{
    key_3: "value 3"
  }
]

OpenAI.thread_message_modify("thread_...", "msg_...", params)

Example response

{:ok,
%{
  assistant_id: nil,
  content: [
    %{
      "text" => %{"annotations" => [], "value" => "Hello, what is AI?"},
      "type" => "text"
    }
  ],
  created_at: 1699706818,
  file_ids: ["file-..."],
  id: "msg_...",
  metadata: %{"key_1" => "value 1", "key_2" => "value 2", "key_3" => "value 3"},
  object: "thread.message",
  role: "user",
  run_id: nil,
  thread_id: "thread_..."
}}

See: https://platform.openai.com/docs/api-reference/messages/modifyMessage

thread_message_files(thread_id, message_id)

Retrieves the list of files associated with a particular message of a thread.

Example request

OpenAI.thread_message_files("thread_...", "msg_...")

Example response

{:ok,
%{
  data: [
    %{
      "created_at" => 1699706818,
      "id" => "file-...",
      "message_id" => "msg_...",
      "object" => "thread.message.file"
    }
  ],
  first_id: "file-...",
  has_more: false,
  last_id: "file-...",
  object: "list"
}}

See: https://platform.openai.com/docs/api-reference/messages/listMessageFiles

thread_message_files(thread_id, message_id, params)

Retrieves the list of files associated with a particular message of a thread, filtered by query params.

Example request

OpenAI.thread_message_files("thread_...", "msg_...", after: "file-...")

Example response

{:ok,
%{
  data: [
    %{
      "created_at" => 1699706818,
      "id" => "file-...",
      "message_id" => "msg_...",
      "object" => "thread.message.file"
    }
  ],
  first_id: "file-...",
  has_more: false,
  last_id: "file-...",
  object: "list"
}}

See: https://platform.openai.com/docs/api-reference/messages/listMessageFiles

thread_message_file(thread_id, message_id, file_id)

Retrieves the message file object.

Example request

OpenAI.thread_message_file("thread_...", "msg_...", "file-...")

Example response

{:ok,
%{
  created_at: 1699706818,
  id: "file-...",
  message_id: "msg_...",
  object: "thread.message.file"
}}

See: https://platform.openai.com/docs/api-reference/messages/getMessageFile

thread_runs(thread_id, params)

Retrieves the list of runs associated with a particular thread, filtered by query params.

Example request

OpenAI.thread_runs("thread_...", limit: 10)

Example response

{:ok, %{
  data: [],
  first_id: nil,
  has_more: false,
  last_id: nil,
  object: "list"
}}

See: https://platform.openai.com/docs/api-reference/runs/listRuns

thread_run(thread_id, run_id)

Retrieves a particular thread run by its id.

Example request

OpenAI.thread_run("thread_...", "run_...")

Example response

{:ok,
 %{
   assistant_id: "asst_J",
   cancelled_at: nil,
   completed_at: 1700234149,
   created_at: 1700234128,
   expires_at: nil,
   failed_at: nil,
   file_ids: [],
   id: "run_",
   instructions: "You are an AI learning assistant.",
   last_error: nil,
   metadata: %{"key_3" => "value 3"},
   model: "gpt-4-1106-preview",
   object: "thread.run",
   started_at: 1700234129,
   status: "expired",
   thread_id: "thread_",
   tools: [%{"type" => "retrieval"}]
 }}

See: https://platform.openai.com/docs/api-reference/runs/getRun
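Runs are processed asynchronously (note the "queued" and "completed" statuses in the responses above), so a common pattern is to poll the run with thread_run/2 until it reaches a terminal status. A minimal sketch, assuming the terminal status names from the OpenAI API reference and an arbitrary 1-second polling interval:

```elixir
defmodule RunPoller do
  # Terminal statuses per the OpenAI runs API reference.
  @terminal ~w(completed failed cancelled expired)

  # True when the run will no longer change status.
  def terminal?(status), do: status in @terminal

  # Poll OpenAI.thread_run/2 until the run finishes
  # (the 1s interval is an arbitrary choice).
  def await(thread_id, run_id) do
    {:ok, run} = OpenAI.thread_run(thread_id, run_id)

    if terminal?(run.status) do
      run
    else
      Process.sleep(1_000)
      await(thread_id, run_id)
    end
  end
end
```

In production you would likely also cap the number of attempts, since runs can stay queued for a while under load.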

thread_run_create(thread_id, params)

Creates a run for a thread using a particular assistant.

Example request

params = [
  assistant_id: "asst_...",
  model: "gpt-4-1106-preview",
  tools: [%{
    "type" => "retrieval"
  }]
]
OpenAI.thread_run_create("thread_...", params)

Example response

{:ok,
%{
  assistant_id: "asst_...",
  cancelled_at: nil,
  completed_at: nil,
  created_at: 1699711115,
  expires_at: 1699711715,
  failed_at: nil,
  file_ids: ["file-..."],
  id: "run_...",
  instructions: "...",
  last_error: nil,
  metadata: %{},
  model: "gpt-4-1106-preview",
  object: "thread.run",
  started_at: nil,
  status: "queued",
  thread_id: "thread_...",
  tools: [%{"type" => "retrieval"}]
}}

See: https://platform.openai.com/docs/api-reference/runs/createRun

thread_run_modify(thread_id, run_id, params)

Modifies an existing thread run.

Example request

params = [
  metadata: %{
    key_3: "value 3"
  }
]
OpenAI.thread_run_modify("thread_...", "run_...", params)

Example response

 {:ok,
%{
  assistant_id: "asst_...",
  cancelled_at: nil,
  completed_at: 1699711125,
  created_at: 1699711115,
  expires_at: nil,
  failed_at: nil,
  file_ids: ["file-..."],
  id: "run_...",
  instructions: "...",
  last_error: nil,
  metadata: %{"key_3" => "value 3"},
  model: "gpt-4-1106-preview",
  object: "thread.run",
  started_at: 1699711115,
  status: "expired",
  thread_id: "thread_...",
  tools: [%{"type" => "retrieval"}]
}}

See: https://platform.openai.com/docs/api-reference/runs/modifyRun

thread_run_submit_tool_outputs(thread_id, run_id, params)

When a run has the status: "requires_action" and required_action.type is submit_tool_outputs, this endpoint can be used to submit the outputs from the tool calls once they're all completed. All outputs must be submitted in a single request.

Example request

params = [
  tool_outputs: [%{
    tool_call_id: "call_abc123",
    output: "test"
  }]
]
OpenAI.thread_run_submit_tool_outputs("thread_...", "run_...", params)

Example response

{:ok,
  %{
    assistant_id: "asst_abc123",
    cancelled_at: nil,
    completed_at: nil,
    created_at: 1699075592,
    expires_at: 1699076192,
    failed_at: nil,
    file_ids: [],
    id: "run_abc123",
    instructions: "You tell the weather.",
    last_error: nil,
    metadata: %{},
    model: "gpt-4",
    object: "thread.run",
    started_at: 1699075592,
    status: "queued",
    thread_id: "thread_abc123",
    tools: [
      %{
        "function" => %{
          "description" => "Determine weather in my location",
          "name" => "get_weather",
          "parameters" => %{
            "properties" => %{
              "location" => %{
                "description" => "The city and state e.g. San Francisco, CA",
                "type" => "string"
              },
              "unit" => %{"enum" => ["c", "f"], "type" => "string"}
            },
            "required" => ["location"],
            "type" => "object"
          }
        },
        "type" => "function"
      }
    ]
  }
}

See: https://platform.openai.com/docs/api-reference/runs/submitToolOutputs
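A sketch of turning a run's pending tool calls into the tool_outputs payload expected above. The required_action shape follows the OpenAI API reference (nested maps with string keys, matching how this library decodes nested responses); `executor` stands in for whatever function actually runs each tool:

```elixir
defmodule ToolOutputs do
  # Build the tool_outputs list from a run that stopped with
  # status "requires_action". `executor` is a hypothetical
  # 1-arity function that runs one tool call and returns its output.
  def build(run, executor) do
    for call <- run.required_action["submit_tool_outputs"]["tool_calls"] do
      %{tool_call_id: call["id"], output: executor.(call["function"])}
    end
  end
end

# Usage (sketch):
# {:ok, run} = OpenAI.thread_run("thread_...", "run_...")
# if run.status == "requires_action" do
#   OpenAI.thread_run_submit_tool_outputs("thread_...", "run_...",
#     tool_outputs: ToolOutputs.build(run, &run_my_tool/1))
# end
```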

thread_run_cancel(thread_id, run_id)

Cancels an in_progress run.

Example request

OpenAI.thread_run_cancel("thread_...", "run_...")

Example response

 {:ok,
%{
  assistant_id: "asst_...",
  cancelled_at: nil,
  completed_at: 1699711125,
  created_at: 1699711115,
  expires_at: nil,
  failed_at: nil,
  file_ids: ["file-..."],
  id: "run_...",
  instructions: "...",
  last_error: nil,
  metadata: %{"key_3" => "value 3"},
  model: "gpt-4-1106-preview",
  object: "thread.run",
  started_at: 1699711115,
  status: "expired",
  thread_id: "thread_...",
  tools: [%{"type" => "retrieval"}]
}}

See: https://platform.openai.com/docs/api-reference/runs/cancelRun

thread_run_steps(thread_id, run_id)

Retrieves the list of steps associated with a particular run of a thread.

Example request

OpenAI.thread_run_steps("thread_...", "run_...")

Example response

 {:ok,
%{
  data: [
    %{
      "assistant_id" => "asst_...",
      "cancelled_at" => nil,
      "completed_at" => 1699897927,
      "created_at" => 1699897908,
      "expires_at" => nil,
      "failed_at" => nil,
      "id" => "step_...",
      "last_error" => nil,
      "object" => "thread.run.step",
      "run_id" => "run_...",
      "status" => "completed",
      "step_details" => %{
        "message_creation" => %{"message_id" => "msg_..."},
        "type" => "message_creation"
      },
      "thread_id" => "thread_...",
      "type" => "message_creation"
    }
  ],
  first_id: "step_...",
  has_more: false,
  last_id: "step_...",
  object: "list"
}}

See: https://platform.openai.com/docs/api-reference/runs/listRunSteps

thread_run_steps(thread_id, run_id, params)

Retrieves the list of steps associated with a particular run of a thread, filtered by query params.

Example request

OpenAI.thread_run_steps("thread_...", "run_...", order: "asc")

Example response

 {:ok,
%{
  data: [
    %{
      "assistant_id" => "asst_...",
      "cancelled_at" => nil,
      "completed_at" => 1699897927,
      "created_at" => 1699897908,
      "expires_at" => nil,
      "failed_at" => nil,
      "id" => "step_...",
      "last_error" => nil,
      "object" => "thread.run.step",
      "run_id" => "run_...",
      "status" => "completed",
      "step_details" => %{
        "message_creation" => %{"message_id" => "msg_..."},
        "type" => "message_creation"
      },
      "thread_id" => "thread_...",
      "type" => "message_creation"
    }
  ],
  first_id: "step_...",
  has_more: false,
  last_id: "step_...",
  object: "list"
}}

See: https://platform.openai.com/docs/api-reference/runs/listRunSteps

thread_run_step(thread_id, run_id, step_id)

Retrieves a thread run step by its id.

Example request

OpenAI.thread_run_step("thread_...", "run_...", "step_...")

Example response

{:ok,
%{
  assistant_id: "asst_...",
  cancelled_at: nil,
  completed_at: 1699897927,
  created_at: 1699897908,
  expires_at: nil,
  failed_at: nil,
  id: "step_...",
  last_error: nil,
  object: "thread.run.step",
  run_id: "run_...",
  status: "completed",
  step_details: %{
    "message_creation" => %{"message_id" => "msg_..."},
    "type" => "message_creation"
  },
  thread_id: "thread_...",
  type: "message_creation"
}}

See: https://platform.openai.com/docs/api-reference/runs/getRunStep

Deprecated APIs

The following APIs are deprecated but are still supported by the library for backward compatibility with older versions. If you are using them, consider removing them from your project as soon as possible!

Note: as of version 0.5.0 the search, answers, and classifications APIs are no longer supported (they have been removed by OpenAI); if you still need them, consider using v0.4.2.

engines() (DEPRECATED: use models instead)

Get the list of available engines

Example request

OpenAI.engines()

Example response

{:ok, %{
  "data" => [
    %{"id" => "davinci", "object" => "engine", "max_replicas" => ...},
    ...,
    ...
  ]
}}

See: https://beta.openai.com/docs/api-reference/engines/list

engines(engine_id)

Retrieve specific engine info

Example request

OpenAI.engines("davinci")

Example response

{:ok, %{
    "id" => "davinci",
    "object" => "engine",
    "max_replicas": ...
  }
}

See: https://beta.openai.com/docs/api-reference/engines/retrieve

License

The package is available as open source under the terms of the MIT License.

openai.ex's People

Contributors

almirsarajcic, bfolkens, bradhanks, bulld0zer, darova93, kentaro, kianmeng, kpanic, mgallo, miserlou, mrmrinal, nallwhy, nathanalderson, nicnilov, pedromvieira, rwdaigle, shawnleong, speerj

openai.ex's Issues

Library doesn't seem to load default config with Phoenix

I am adding this library to my Phoenix Application.

In my runtime, I've added

case System.get_env("OPENAI_API_KEY") do
  nil -> nil
  key -> config :openai, api_key: key
end

I've validated this populates my api key with OpenAI.Config.api_key(). However, when I call OpenAI.audio_transcription(path, %{model: "whisper-1"}) I get back an error that I haven't populated my API key. From my read of the code, I don't know how this library would ever pick up these defaults. It just calls up an empty %Config{} struct.

Unless there's some magic in here where calling this empty struct picks up defaults from the GenServer state. That said, I've validated the GenServer is running in my Phoenix application, and calling %OpenAI.Config{} just returns an empty struct:

%OpenAI.Config{
  api_key: nil,
  organization_key: nil,
  http_options: nil,
  api_url: nil
}

Just to be thorough, I also added the configs to my config.exs as well, in case they needed to be available at compile time. I still have the same issue.

Remove applications key from MixProject.application/0

The applications key in OpenAI.MixProject.application/0 is most likely a relic of older Mix versions. I don't think it serves a purpose anymore, and we get compiler warnings in our project when compiling this library.

openai.ex/mix.exs

Lines 22 to 23 in dd48493

applications: [:httpoison, :jason, :logger],
extra_applications: [:logger]

If somebody can confirm that there is no specific reason to keep the applications key around, I'm happy to do a PR!

Intermittent Jason.DecodeError while streaming output

During periods of high volume, and in particular when using some of the gpt-3.5 series models, OpenAI will occasionally split events into multiple chunks. The current approach of splitting each line with "\n" assumes the chunks are complete events. However, this is not always the case.

** (Jason.DecodeError) unexpected end of input at position 18
    (jason 1.4.0) lib/jason.ex:92: Jason.decode!/2
    (elixir 1.15.6) lib/enum.ex:1693: Enum."-map/2-lists^map/1-1-"/2
    (elixir 1.15.6) lib/enum.ex:1693: Enum."-map/2-lists^map/1-1-"/2
    (openai 0.6.1) lib/openai/stream.ex:57: anonymous fn/1 in OpenAI.Stream.new/1
    (elixir 1.15.6) lib/stream.ex:1626: Stream.do_resource/5
    (elixir 1.15.6) lib/stream.ex:690: Stream.run/1
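One way to address this is to keep a buffer across chunks and only JSON-decode lines terminated by a newline, carrying any trailing partial line over to the next chunk. A rough sketch of that idea (not the library's actual implementation):

```elixir
defmodule SSEBuffer do
  # Combine the carried-over buffer with the new chunk, return the
  # complete lines (safe to decode) and the trailing partial line
  # (to be prepended to the next chunk).
  def split(buffer, chunk) do
    parts = String.split(buffer <> chunk, "\n")
    {complete, [rest]} = Enum.split(parts, length(parts) - 1)
    {Enum.reject(complete, &(&1 == "")), rest}
  end
end
```

With this, a chunk that ends mid-event yields the partial line as `rest` instead of being fed to Jason.decode!/1.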

`mix-test-watch` dependency running in all environments

Describe the feature or improvement you're requesting

Limit the mix-test-watch dependency to dev and test as suggested in official documentation and avoid mix.deps conflict in case the application is using the default config.

Suggestion (source):

# mix.exs
def deps do
  [
    {:mix_test_watch, "~> 1.0", only: [:dev, :test], runtime: false}
  ]
end

Conflict to avoid:

Dependencies have diverged:
* mix_test_watch (Hex package)
  the :only option for dependency mix_test_watch

  > In mix.exs:
    {:mix_test_watch, "~> 1.1", [env: :prod, hex: "mix_test_watch", only: [:dev, :test], runtime: false, repo: "hexpm"]}

  does not match the :only option calculated for

  > In deps/openai/mix.exs:
    {:mix_test_watch, "~> 1.0", [env: :prod, hex: "mix_test_watch", repo: "hexpm", optional: false]}

  Remove the :only restriction from your dep
** (Mix) Can't continue due to errors on dependencies

Additional context

No response

What about replacing Hackney with Tesla?

Describe the feature or improvement you're requesting

Tesla is easier to control than Hackney (e.g. HTTP/2 support, retries, ...).

What about replacing Hackney with Tesla, and setting Hackney as the default Tesla adapter?

Additional context

No response

Feature: Atomize string keys in stream responses

Describe the feature or improvement you're requesting

Currently when stream: true is set, we're receiving responses with string keys:

%{
  "choices" => [
    %{"delta" => %{"role" => "assistant"}, "finish_reason" => nil, "index" => 0}
  ],
  "created" => 1682700668,
  "id" => "chatcmpl-7ALbIuLju70hXy3jPa3o5VVlrxR6a",
  "model" => "gpt-3.5-turbo-0301",
  "object" => "chat.completion.chunk"
}

In line with the standard (non-stream) responses, I'd expect this map to use atom keys i.e.

%{
  choices: [
    %{delta: %{role: "assistant"}, finish_reason: nil, index: 0}
  ],
  created: 1682700668,
  id: "chatcmpl-7ALbIuLju70hXy3jPa3o5VVlrxR6a",
  model: "gpt-3.5-turbo-0301",
  object: "chat.completion.chunk"
}
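Until the library supports this, a small recursive helper can convert the string keys client-side. This sketch uses String.to_existing_atom/1 to avoid unbounded atom creation (see the JSON decoding issue below), which means the target atoms must already exist in the VM:

```elixir
defmodule Atomize do
  # Recursively convert string map keys to existing atoms,
  # descending into nested maps and lists. Raises if a key
  # has no corresponding existing atom.
  def keys(map) when is_map(map) do
    Map.new(map, fn {k, v} -> {String.to_existing_atom(k), keys(v)} end)
  end

  def keys(list) when is_list(list), do: Enum.map(list, &keys/1)
  def keys(other), do: other
end
```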

Additional context

No response

Streaming example from docs doesn't work

Describe the feature or improvement you're requesting

Not sure if I'm doing something wrong, but the example from the docs:

OpenAI.chat_completion([
    model: "gpt-3.5-turbo",
    messages: [
      %{role: "system", content: "You are a helpful assistant."},
      %{role: "user", content: "Who won the world series in 2020?"},
      %{role: "assistant", content: "The Los Angeles Dodgers won the World Series in 2020."},
      %{role: "user", content: "Where was it played?"}
    ],
    stream: true, # set this param to true
  ]
)
|> Stream.each(fn res ->
  IO.inspect(res)
end)
|> Stream.run()

generates an error:

** (CaseClauseError) no case clause matching: {:ok, %HTTPoison.AsyncResponse{id: #Reference<0.2742941706.3081240585.23996>}}
    (openai 0.5.1) lib/openai/client.ex:26: OpenAI.Client.handle_response/1

Additional context

No response

Bug: http_options configuration is not used

I'm using OpenAI chat completion with Stream

in runtime.exs I have the config set as documented:

if config_env() in [:prod, :dev] do
  config :openai,
    # find it at https://platform.openai.com/account/api-keys
    api_key: System.get_env("OPENAI_API_KEY"),
    # find it at https://platform.openai.com/account/org-settings under "Organization ID"
    organization_key: System.get_env("OPENAI_ORG_KEY"),
    # optional, passed to [HTTPoison.Request](https://hexdocs.pm/httpoison/HTTPoison.Request.html) options
    http_options: [recv_timeout: :infinity, stream_to: self(), async: :once]
end

And then running the example from the documentation:

OpenAI.chat_completion([
    model: "gpt-3.5-turbo",
    messages: [
      %{role: "system", content: "You are a helpful assistant."},
      %{role: "user", content: "Who won the world series in 2020?"},
      %{role: "assistant", content: "The Los Angeles Dodgers won the World Series in 2020."},
      %{role: "user", content: "Where was it played?"}
    ],
    stream: true,
  ]
)
|> Stream.each(fn res ->
  IO.inspect(res)
end)
|> Stream.run()

But nothing happens, the process hangs indefinitely, with no inspect output.

When creating the stream with inline config, it works OK:

OpenAI.chat_completion([
    model: "gpt-3.5-turbo",
    messages: [
      %{role: "system", content: "You are a helpful assistant."},
      %{role: "user", content: "Who won the world series in 2020?"},
      %{role: "assistant", content: "The Los Angeles Dodgers won the World Series in 2020."},
      %{role: "user", content: "Where was it played?"}
    ],
    stream: true,
  ],
  %OpenAI.Config{http_options: [recv_timeout: :infinity, stream_to: self(), async: :once]}
)

But I would prefer to not use inline config, and instead use application config as shown in the documentation.

Streaming example does not work in the shell

hi! first, thanks for your work on this 😊

I've gotten the streaming to work in an .exs file (as demonstrated in #36), but it doesn't seem to work in a shell (iex -S mix); it just hangs forever.

is there a fundamental reason that has to do with the shell, or am I just missing something?

API key error on prod: You didn't provide an API key. You need to provide your API key in an Authorization header

I'm having this problem on prod. The OpenAI call errors out with:

** (MatchError) no match of right hand side value: {:error, %{"error" => %{"code" => nil, "message" => "You didn't provide an API key. You need to provide your API key in an Authorization header using Bearer auth (i.e. Authorization: Bearer YOUR_KEY), or as the password field (with blank username) if you're accessing the API from your browser and are prompted for a username and password. You can obtain an API key from https://platform.openai.com/account/api-keys.", "param" => nil, "type" => "invalid_request_error"}}}

Example I run on iex, this works in dev, does not in prod:

OpenAI.chat_completion(
  model: "gpt-3.5-turbo",
  messages: [%{role: "user", content: "Hello how are you?"}]
)

I verified that the ENV keys are set properly on prod as well using echo $OPENAI_API_KEY.

My config.exs looks like this:

config :openai,
  # find it at https://platform.openai.com/account/api-keys
  api_key: System.get_env("OPENAI_API_KEY"),
  # find it at https://platform.openai.com/account/org-settings under "Organization ID"
  organization_key: System.get_env("OPENAI_ORGANIZATION_ID")

Any suggestions?

Make URL target a config option to allow for easier local testing and mocking

Describe the feature or improvement you're requesting

I would like to be able to use Bypass, or similar, to write local integration tests without having to hit the actual OpenAI API. Key to this is the ability to set openai.ex's URL, currently hardcoded as @openai_url within OpenAI.Config.

I would like to propose that the openai_url be overridable by a new api_url config option:

config :openai,
  api_key: "your-api-key",
  organization_key: "your-organization-key",
  api_url: "http://localhost/",
  http_options: [recv_timeout: 2_000] 

This could then be overridden in a test setup block like so:

  setup %{conn: conn} do

    # Setup mock OpenAI server
    bypass = Bypass.open()
    Application.put_env(:openai, :api_url, "http://localhost:#{bypass.port}/")

    # ...

    {:ok, bypass: bypass, conn: conn}
  end

Thoughts?

Additional context

No response

Add compatibility with Azure's OpenAI API Endpoints

Describe the feature or improvement you're requesting

Would you also be willing to have a setting to make this library compatible with Azure's version of OpenAI API endpoints?

This would mirror openai library for Python https://github.com/openai/openai-python#microsoft-azure-endpoints
Azure only uses a subset of the endpoints OpenAI provides with a different request URL.

Here is a link to the Swagger doc for the endpoints for auditing if feasible. I am also willing to help add this feature.
https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json

Additional context

No response

API key per request

Describe the feature or improvement you're requesting

I believe the API key is currently read once from the environment during configuration and then re-used globally. It would be nice to be able to set the API key per request.

Additional context

We have a multi-tenant use case where multiple OpenAI API keys are present and certain requests must use certain keys.
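The library does accept an %OpenAI.Config{} struct as the final argument to its functions (the same override shown for http_options in the streaming issue below), which can carry a per-tenant key. A sketch, where `fetch_api_key!/1` is a hypothetical tenant lookup:

```elixir
defmodule TenantChat do
  # Per-tenant API keys by passing an %OpenAI.Config{} as the
  # final argument instead of relying on the global app config.
  def completion(tenant, messages) do
    config = %OpenAI.Config{api_key: fetch_api_key!(tenant)}

    OpenAI.chat_completion(
      [model: "gpt-3.5-turbo", messages: messages],
      config
    )
  end

  # Hypothetical lookup; in practice this might hit a vault or DB.
  defp fetch_api_key!(tenant), do: Map.fetch!(tenant, :openai_api_key)
end
```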

Handle nginx Error

Got an unexpected error when their servers were misconfigured:

** (CaseClauseError) no case clause matching: {:ok, %HTTPoison.Response{status_code: 503, body: {:error, {:unexpected_token, "<html>\r\n<head><title>503 Service Temporarily Unavailable</title></head>\r\n<body>\r\n<center><h1>503 Service Temporarily Unavailable</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>"}},

It would be great if this gave back an :error instead.

HTTPoison error cases aren't handled in Stream.new/1

When handling a streaming request, OpenAI.Client#91 may return %HTTPoison.Error{reason: _, id: nil}, which then causes the following:

** (FunctionClauseError) no function clause matching in anonymous fn/1 in OpenAI.Stream.new/1

Having looked through the OpenAI.Stream module, and to preserve backward compatibility, I propose we handle the error case in the OpenAI.Stream.new/1 resource and return the error as a stream item, similar to the %{"status" => :error} pattern already present when non-200 status codes are received.

Chat Support

Describe the feature or improvement you're requesting

This just dropped:
https://platform.openai.com/docs/guides/chat

Would be wonderful to get support in your library for it. If you don't have time in the near future I will try to add it myself. Thx!

Additional context

No response

OpenAI Agents Behaviour

Describe the feature or improvement you're requesting

A lot of what's required for defining an agent is also part of the documentation process for functions. Maybe a behaviour could be used to define agents in a similar way, or the existing module attributes like @doc and @spec could be reused.

I'd be happy to help with this, and could put something together for a more formal proposal as well.

Additional context

No response

Improve JSON decoding strategy

Describe the feature or improvement you're requesting

In the current implementation, the HTTP client is very unsafe and slow: https://github.com/mgallo/openai.ex/blob/main/lib/openai/client.ex#L15

Calling String.to_atom/1 is considered a bad practice and should be avoided in frequent code paths like this, since this creates new atoms in the VM memory, which will never be GCed.

The responses are also unstructured for mostly the same reason.
Not to mention that JSON as a library is horribly slow compared to all the other engines: https://gist.github.com/devinus/f56cff9e5a0aa9de9215cf33212085f6

My suggestions:

  • Replace JSON with Jason or Poison
  • Switch to safe atom creation strategy (pretty easy with Jason/Poison to not have to manually do string conversion to existing atoms)
  • Define core API models as structs, and directly decode to them
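For reference on the safe-atom suggestion: Jason can decode straight to existing atoms via the `keys: :atoms!` option, which uses String.to_existing_atom/1 under the hood, so only atoms already known to the VM are created and unknown keys raise instead of silently growing the atom table. A sketch (assumes Jason is a dependency):

```elixir
# Ensure the expected atoms exist before decoding; the list
# literal itself creates them.
_known = [:model, :object]

body = ~s({"model": "gpt-4", "object": "chat.completion"})

Jason.decode!(body, keys: :atoms!)
# => %{model: "gpt-4", object: "chat.completion"}
```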

Additional context

No response

Update documentation version

Describe the feature or improvement you're requesting

As the title says, we could use a newer version of the docs.

Additional context

I have ready PR, it's a small thing :)
