
openai-quickstart-node's Introduction

OpenAI API Quickstart - Node.js example app

This is an example chat app intended to get you started with your first OpenAI API project. It uses the Chat Completions API to create a simple general purpose chat app with streaming.

Basic request

To send your first API request with the OpenAI Node SDK, make sure you have the right dependencies installed and then run the following code:

import OpenAI from "openai";

const openai = new OpenAI();

async function main() {
  const completion = await openai.chat.completions.create({
    messages: [{ role: "system", content: "You are a helpful assistant." }],
    model: "gpt-3.5-turbo",
  });

  console.log(completion.choices[0]);
}

main();

This quickstart app builds on top of the example code above, with streaming and a UI to visualize messages.
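
For reference, a minimal sketch of what streaming looks like with the same SDK (the model and messages below are illustrative, not the app's exact code):

import OpenAI from "openai";

const openai = new OpenAI();

async function main() {
  // stream: true returns an async iterable of chunks instead of one completion object.
  const stream = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Say hello!" }],
    stream: true,
  });

  for await (const chunk of stream) {
    // Each chunk carries a small delta of the assistant's reply.
    process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
  }
}

main();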

Setup

  1. If you don’t have Node.js installed, install it from nodejs.org (Node.js version >= 16.0.0 required)

  2. Clone this repository

  3. Navigate into the project directory

    $ cd openai-quickstart-node
  4. Install the requirements

    $ npm install
  5. Make a copy of the example environment variables file

    On Linux systems:

    $ cp .env.example .env

    On Windows:

    $ copy .env.example .env
  6. Add your API key to the newly created .env file (a sample .env is shown after these steps)

  7. Run the app

    $ npm run dev

You should now be able to access the app at http://localhost:3000! For the full context behind this example app, check out the tutorial.
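
For reference, after step 6 the .env file should contain a single line with your key (the value below is a placeholder, not a real key):

OPENAI_API_KEY=sk-your-api-key-here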

openai-quickstart-node's People

Contributors

abshek17, deadex-ng, dependabot[bot], jeevnayak, logankilpatrick, pathree, schnerd, shiba-hiro


openai-quickstart-node's Issues

Error occurred after running npm run dev

Describe the bug

root@chat:/opt/chatGPT/openai-quickstart-node# npm run dev

> [email protected] dev /opt/chatGPT/openai-quickstart-node
> next dev

/opt/chatGPT/openai-quickstart-node/node_modules/next/dist/cli/next-dev.js:309
showAll: args["--show-all"] ?? false,
                            ^
SyntaxError: Unexpected token ?
    at Module._compile (internal/modules/cjs/loader.js:723:23)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10)
    at Module.load (internal/modules/cjs/loader.js:653:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:593:12)
    at Function.Module._load (internal/modules/cjs/loader.js:585:3)
    at Module.require (internal/modules/cjs/loader.js:692:17)
    at require (internal/modules/cjs/helpers.js:25:18)
    at Object.dev (/opt/chatGPT/openai-quickstart-node/node_modules/next/dist/lib/commands.js:10:30)
    at Object. (/opt/chatGPT/openai-quickstart-node/node_modules/next/dist/bin/next:141:28)
    at Module._compile (internal/modules/cjs/loader.js:778:30)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] dev: next dev
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] dev script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2023-01-14T19_43_30_894Z-debug.log

To Reproduce

Enter command: npm run dev

See error above.

OS

Ubuntu 22.04

Node version

v10.19.0

Tried querying the "text-davinci-003" model via an API key today and got this error

Describe the bug

APIConnectionError: Error communicating with OpenAI: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/engines/text-davinci-003/completions (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x14ea861b14e0>: Failed to establish a new connection: [Errno -2] Name or service not known'))

To Reproduce

import openai

# Use the API key for authentication
openai.api_key = "sk-XXXXXXXXXXXXXXXX"

# Define the model to use
model_engine = "text-davinci-003"

# Define the prompt to use as input
prompt = "Are all Indians into tech?"

# Request a completion from the model
completions = openai.Completion.create(
    engine=model_engine,
    prompt=prompt,
    max_tokens=1024,
    n=1,
    stop=None,
    temperature=0.5,
)

# Get the first response from the completions
message = completions.choices[0].text

# Print the response
print(message)

OS

No response

Node version

No response

OpenAI API Key Security in Next.js

Describe the bug

OpenAI's official documentation explains API keys as follows:

"Remember that your API key is a secret! Do not share it with others or expose it in any client-side code (browsers, apps). Production requests must be routed through your own backend server where your API key can be securely loaded from an environment variable or key management service."

Does the API route provided in the current Next.js example count as "your own backend server"?
If we put the API key in an environment variable and use Vercel for deployment, are there any security issues to worry about?

Thanks in advance :)

To Reproduce

Status quo of the provided example

OS

No response

Node version

No response

Error: Request failed with status code 429

Hi everyone!
I'm trying to run the app locally but getting this error with status code 429, which corresponds to "too many requests".

npm run dev

> [email protected] dev
> next dev

ready - started server on 0.0.0.0:3000, url: http://localhost:3000
info  - Loaded env from /Users/tesnik/Desktop/Workspace/FullStack/openai-quickstart-node/.env
wait  - compiling...
event - compiled client and server successfully in 167 ms (113 modules)
Browserslist: caniuse-lite is outdated. Please run:
  npx browserslist@latest --update-db
  Why you should do it regularly: https://github.com/browserslist/browserslist#browsers-data-updating
wait  - compiling / (client and server)...
wait  - compiling...
event - compiled client and server successfully in 70 ms (132 modules)
wait  - compiling /_error (client and server)...
wait  - compiling...
event - compiled client and server successfully in 42 ms (133 modules)
wait  - compiling /api/generate...
wait  - compiling...
event - compiled client and server successfully in 114 ms (143 modules)
error - Error: Request failed with status code 429
error - Error: Request failed with status code 429
error - Error: Request failed with status code 429
error - Error: Request failed with status code 429
error - Error: Request failed with status code 429
error - Error: Request failed with status code 429

Does this mean that I ran out of free quota or what?
Appreciate any help, thanks!

npm run dev not working

When I reopen the project and try running the npm run dev command, the following is displayed:

ready - started server on 0.0.0.0:3000, url: http://localhost:3000/
info - Loaded env from C:\Users...\openai-quickstart-node-master.env

However, when I click on the local host link, the web page doesn't load. Does anyone know why?

Rewrite in TypeScript

Describe the feature or improvement you're requesting

It's 2023.

Additional context

I apologise for my bluntness, but please don't let that detract from the fact that it is, indeed, 2023.

Internal Server Error

I am running the project locally, but whenever I click "Generate names" it shows this error. Does anyone know how to fix it?

429 Error

Describe the bug

Hi, I tried to follow the steps described in the readme to run the example app, but I get a 429 error as shown below. The API key I generated and put into the .env file doesn't have any usage, so I thought the request should go through. Is it just that the OpenAI API is at capacity, or is there anything further I should try to do to resolve the issue? Thanks for the help!

Error message below:

[~/code/app/openai-quickstart-node]$ npm run dev

> [email protected] dev
> next dev

ready - started server on 0.0.0.0:3000, url: http://localhost:3000
info  - Loaded env from /Users/henry/Documents/code/app/openai-quickstart-node/.env
wait  - compiling...
event - compiled client and server successfully in 2.3s (113 modules)
Browserslist: caniuse-lite is outdated. Please run:
  npx browserslist@latest --update-db
  Why you should do it regularly: https://github.com/browserslist/browserslist#browsers-data-updating
wait  - compiling / (client and server)...
wait  - compiling...
event - compiled client and server successfully in 2.4s (132 modules)
wait  - compiling /api/generate...
wait  - compiling...
event - compiled client and server successfully in 536 ms (142 modules)
wait  - compiling /_error (client and server)...
wait  - compiling...
error - Error: Request failed with status code 429
event - compiled client and server successfully in 101 ms (143 modules)
error - Error: Request failed with status code 429
error - Error: Request failed with status code 429

To Reproduce

Follow the 7 steps shown in the readme below, then try to generate names on http://localhost:3000/
https://github.com/openai/openai-quickstart-node

OS

macOS

Node version

Node.js v18.13.0

npm run dev succeeds but responds with 500: message: "An error occurred during your request."

Describe the bug

After clicking the Generate names button, the HTTP response shows:
{"error":{"message":"An error occurred during your request."}}
and the terminal shows:
Error with OpenAI API request: connect ETIMEDOUT 199.59.148.206:443

I can access https://chat.openai.com/chat through a proxy in China, so I don't think it is a proxy problem.

To Reproduce

  1. Run npm run dev successfully.
  2. Enter an animal.
  3. Click the button.
  4. The error message appears: {"error":{"message":"An error occurred during your request."}}

OS

No response

Node version

v18.12.1

NOT able to run it.

Describe the bug

I just downloaded the demo, switched the apiKey to my own, and ran it.
When I click the "generate names" button, the following shows in my terminal.

400 Bad Request

The plain HTTP request was sent to HTTPS port
nginx


To Reproduce

  1. gh repo clone openai/openai-quickstart-node
  2. Switch the apiKey to my own.
  3. npm install & npm run dev
  4. Enter input and click the button.

OS

No response

Node version

No response

Change the http port number

Describe the feature or improvement you're requesting

Sometimes it is necessary to change the HTTP port number. How can this be done? Can this be made configurable?

Additional context

Never mind... it can be changed in package.json, sorry (a sketch is shown below).
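
For anyone else looking for this: a minimal sketch of that change, assuming the default Next.js dev script in package.json (the port number is just an example):

{
  "scripts": {
    "dev": "next dev -p 4000"
  }
}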

Current Error Handling causing new error for timeouts

Describe the bug

Community post of affected users here: https://community.openai.com/t/api-504s-in-production-vercel-only/28795

Vercel users are currently experiencing a timeout issue with OpenAI's API; however, it is not clear that the issue is a timeout, due to the way error handling has been set up in the quickstart project.

Currently the sample project's fetch completion handler tries to parse a JSON response out of an error string, which leads to an unclear error:
Uncaught (in promise) SyntaxError: Unexpected token 'A', "An error o"... is not valid JSON

The error handling would be more reliable if we reported on the status code rather than assuming we will always get a JSON response. I created a pull request for it here: #46 (comment)

To Reproduce

  1. Fork openai-quickstart-node
  2. Log into Vercel via GitHub and deploy the project (should take just a few minutes)
  3. In the now deployed environment for openai-quickstart-node, start making large queries to the API
  4. You should start seeing timeouts cause failures, but with a confusing message like SyntaxError: Unexpected token ...

Solution

Pull request here #46 (comment)
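
As a rough illustration of the idea (not the exact code from the pull request), the client-side submit handler could check the HTTP status before attempting to parse JSON; the /api/generate endpoint and the animal field follow the quickstart's pet-name example and are shown here only for context:

// inside the async submit handler
const response = await fetch("/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ animal: animalInput }),
});

if (!response.ok) {
  // Surface the status code instead of assuming the body is valid JSON.
  const text = await response.text();
  throw new Error(`Request failed with status ${response.status}: ${text || response.statusText}`);
}

const data = await response.json();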

OS

No response

Node version

No response

The readme Docs should contain a section for the contributors

Describe the feature or improvement you're requesting

Description

OpenAI is one of the most significant companies in the world, so every contributor to the OpenAI quickstart hopes that their GitHub profile becomes visible to the people visiting this repo.

What is the problem?

The current readme documentation doesn't contain any section where we can directly see the people who contributed to the repo, and because of that a lot of people are missing out on that visibility.

How can we address it?

I will update the markdown to add a section listing all the GitHub usernames that have contributed to the repo, and I will make sure this contributors section comes after everything else, as the last section of the readme documentation.

Why is it important?

It is important because:

  • Contributors would be happier seeing their profiles on the repo's home page, and it would motivate new contributors to contribute to the repo.
  • It also reflects well on the company, because it shows they take care of their contributors.

Who needs this?

Everyone using this

When should this happen (use version numbers if needed)?
Whenever this issue sounds helpful to the readers.

If the issue sounds helpful, please assign it to me and I will be very happy to contribute ;).

Additional context

No response

Server error when deploy to Heroku

Hi,

I wanted to deploy the example on a Heroku server but got an error. Any ideas?

Server Error TypeError: (0 , react_jsx_dev_runtime__WEBPACK_IMPORTED_MODULE_0__.jsxDEV) is not a function

How to add proxy config in the request?

Describe the feature or improvement you're requesting

I access api.openai.com via proxy.
When I try to send a request by submitting the form in the project, the application fails with this error:

requests.exceptions.ProxyError: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/completions (Caused by ProxyError('Cannot connect to proxy.', OSError(0, 'Error')))

I tried the URL in a web browser with the proxy and got a 200 response.

I read the source code but did not find anywhere to add a proxy config. How can I do that?
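
Not an official answer, but with the axios-based v3 openai package (the one this quickstart used at the time) one common workaround is to pass an HTTPS proxy agent through the per-request options. This is only a sketch under those assumptions; the proxy URL and the https-proxy-agent package (whose import shape varies by version) are not part of this repo:

const { Configuration, OpenAIApi } = require("openai");
const { HttpsProxyAgent } = require("https-proxy-agent");

const configuration = new Configuration({ apiKey: process.env.OPENAI_API_KEY });
const openai = new OpenAIApi(configuration);

async function main() {
  // The second argument is forwarded to axios, so a proxy agent can be attached there.
  const completion = await openai.createCompletion(
    {
      model: "text-davinci-003",
      prompt: "Suggest three names for a pet",
      temperature: 0.6,
    },
    {
      httpsAgent: new HttpsProxyAgent("http://127.0.0.1:7890"), // your local proxy (assumed value)
      proxy: false, // let the agent handle proxying instead of axios's built-in proxy option
    }
  );
  console.log(completion.data.choices[0].text);
}

main();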

Additional context

No response

connect ETIMEDOUT 31.13.70.33:443

Describe the bug

When I run npm run dev and call the interface, this error is reported, and I can't ping 31.13.94.36. How should I solve this problem?

To Reproduce

  1. npm install
  2. npm run dev
  3. connect ETIMEDOUT 31.13.70.33:443

OS

No response

Node version

No response

crypto hash error in Node.js v17.7.1

Getting this error on start with node 17. Works fine in 16. Thanks for the quickstart! :)

ready - started server on 0.0.0.0:3000, url: http://localhost:3000
info - Loaded env from /Users/location0/openai-quickstart-node/.env
info - Using webpack 5. Reason: Enabled by default https://nextjs.org/docs/messages/webpack5
Error: error:0308010C:digital envelope routines::unsupported
at new Hash (node:internal/crypto/hash:67:19)
at Object.createHash (node:crypto:135:10)
at BulkUpdateDecorator.hashFactory (/Users/location0/openai-quickstart-node/node_modules/next/dist/compiled/webpack/bundle5.js:138971:18)
at BulkUpdateDecorator.update (/Users/location0/openai-quickstart-node/node_modules/next/dist/compiled/webpack/bundle5.js:138872:50)
at OriginalSource.updateHash (/Users/location0/openai-quickstart-node/node_modules/next/dist/compiled/webpack-sources3/index.js:1:10264)
at NormalModule._initBuildHash (/Users/location0/openai-quickstart-node/node_modules/next/dist/compiled/webpack/bundle5.js:68468:17)
at handleParseResult (/Users/location0/openai-quickstart-node/node_modules/next/dist/compiled/webpack/bundle5.js:68534:10)
at /Users/location0/openai-quickstart-node/node_modules/next/dist/compiled/webpack/bundle5.js:68628:4
at processResult (/Users/location0/openai-quickstart-node/node_modules/next/dist/compiled/webpack/bundle5.js:68343:11)
at /Users/location0/openai-quickstart-node/node_modules/next/dist/compiled/webpack/bundle5.js:68407:5
Error: error:0308010C:digital envelope routines::unsupported
at new Hash (node:internal/crypto/hash:67:19)
at Object.createHash (node:crypto:135:10)
at BulkUpdateDecorator.hashFactory (/Users/location0/openai-quickstart-node/node_modules/next/dist/compiled/webpack/bundle5.js:138971:18)
at BulkUpdateDecorator.update (/Users/location0/openai-quickstart-node/node_modules/next/dist/compiled/webpack/bundle5.js:138872:50)
at OriginalSource.updateHash (/Users/location0/openai-quickstart-node/node_modules/next/dist/compiled/webpack-sources3/index.js:1:10264)
at NormalModule._initBuildHash (/Users/location0/openai-quickstart-node/node_modules/next/dist/compiled/webpack/bundle5.js:68468:17)
at handleParseResult (/Users/location0/openai-quickstart-node/node_modules/next/dist/compiled/webpack/bundle5.js:68534:10)
at /Users/location0/openai-quickstart-node/node_modules/next/dist/compiled/webpack/bundle5.js:68628:4
at processResult (/Users/location0/openai-quickstart-node/node_modules/next/dist/compiled/webpack/bundle5.js:68343:11)
at /Users/location0/openai-quickstart-node/node_modules/next/dist/compiled/webpack/bundle5.js:68407:5
node:internal/crypto/hash:67
this[kHandle] = new _Hash(algorithm, xofLen);
^

Error: error:0308010C:digital envelope routines::unsupported
at new Hash (node:internal/crypto/hash:67:19)
at Object.createHash (node:crypto:135:10)
at BulkUpdateDecorator.hashFactory (/Users/location0/openai-quickstart-node/node_modules/next/dist/compiled/webpack/bundle5.js:138971:18)
at BulkUpdateDecorator.update (/Users/location0/openai-quickstart-node/node_modules/next/dist/compiled/webpack/bundle5.js:138872:50)
at OriginalSource.updateHash (/Users/location0/openai-quickstart-node/node_modules/next/dist/compiled/webpack-sources3/index.js:1:10264)
at NormalModule._initBuildHash (/Users/location0/openai-quickstart-node/node_modules/next/dist/compiled/webpack/bundle5.js:68468:17)
at handleParseResult (/Users/location0/openai-quickstart-node/node_modules/next/dist/compiled/webpack/bundle5.js:68534:10)
at /Users/location0/openai-quickstart-node/node_modules/next/dist/compiled/webpack/bundle5.js:68628:4
at processResult (/Users/location0/openai-quickstart-node/node_modules/next/dist/compiled/webpack/bundle5.js:68343:11)
at /Users/location0/openai-quickstart-node/node_modules/next/dist/compiled/webpack/bundle5.js:68407:5 {
opensslErrorStack: [ 'error:03000086:digital envelope routines::initialization error' ],
library: 'digital envelope routines',
reason: 'unsupported',
code: 'ERR_OSSL_EVP_UNSUPPORTED'
}

Node.js v17.7.1

I updated the code to use gpt-3.5-turbo model on my fork -- feel free to copy/paste

Describe the feature or improvement you're requesting

I updated the code to use GPT-3.5-turbo. Feel free to copy my code from my fork @ https://github.com/CaseySMiller/openai-quickstart-node

I also wasn't able to get .env variables from ./.env, so I changed it to ./.env.local.

I updated the readme to reflect this.

Additional context

  • Updated the .env.example file name
  • Updated readme to reflect .env change
  • Changed to newest version of openai library in package.json
  • Modified ./pages/api/generate.js to call createChatCompletion instead of the createCompletion method and pass it a correctly formatted object (a rough sketch is shown after this list)
  • Happy copy/pasting ;)
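
For readers who just want the gist, here is a hedged sketch of what that generate.js change roughly looks like with the v3 library; the animal variable and the prompt wording are illustrative, not copied from the fork:

// inside the async API route handler, after validating the input
const completion = await openai.createChatCompletion({
  model: "gpt-3.5-turbo",
  messages: [
    { role: "system", content: "You are a helpful assistant that suggests pet names." },
    { role: "user", content: `Suggest three names for an animal: ${animal}` },
  ],
  temperature: 0.6,
});

// Chat completions return message objects rather than plain text.
res.status(200).json({ result: completion.data.choices[0].message.content });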

Returns status code 500

Describe the bug

Using the Node.js sample exactly as-is with a good key, this line executes and a status code 500 is returned:

message: "OpenAI API key not configured, please follow instructions in README.md",

I just created a new key and added it to the .env file like this:

OPENAI_API_KEY=k-5ybezAX9xxxxx_mykey
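
For context, this message comes from a guard along these lines near the top of the quickstart's pages/api/generate.js (paraphrased, so treat it as approximate); it fires when process.env.OPENAI_API_KEY is not picked up from .env:

if (!configuration.apiKey) {
  // The key was not loaded from the environment, so fail early with a clear message.
  res.status(500).json({
    error: {
      message: "OpenAI API key not configured, please follow instructions in README.md",
    },
  });
  return;
}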

To Reproduce

See above

OS

Windows 11

Node version

Node v16.16.0
