
chatgpt's Introduction

ChatGPT gpt-3.5-turbo API for Free (as a Reverse Proxy)

Welcome to the ChatGPT API Free Reverse Proxy, which offers free, self-hosted API access to ChatGPT (gpt-3.5-turbo) using OpenAI's familiar request/response format, so no code changes are needed.

Quick Links

  • Join our Discord Community for support and questions.
    • ⚡Note: Your Discord account must be at least 7 days old to be able to join our Discord community.


Features

  • Streaming Response: The API supports streaming responses, so you can start reading the reply as soon as it is available.
  • API Endpoint Compatibility: Full alignment with official OpenAI API endpoints, ensuring hassle-free integration with existing OpenAI libraries.
  • Complimentary Access: No charges for API usage, making advanced AI accessible to everyone even without an API key.

Installing/Self-Hosting Guide

Using Docker

  1. Ensure Docker is installed by referring to the Docker Installation Docs.
  2. Run the following command:
    docker run -dp 3040:3040 pawanosman/chatgpt:latest
  3. Done! You can now connect to your local server's API at:
    http://localhost:3040/v1/chat/completions
    
    Note that the base URL is http://localhost:3040/v1.
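
To verify that the container is up, you can POST a test request to the endpoint above. Below is a minimal sketch using Python's requests library (install it with pip install requests if needed); any API key value should work against a self-hosted proxy:

# Minimal sketch: send a test chat completion to the locally self-hosted proxy.
import requests

response = requests.post(
    "http://localhost:3040/v1/chat/completions",
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Say hello"}],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])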

Install with chat web interfaces

✅ You can run third-party chat web interfaces, such as BetterChatGPT and LobeChat, with this API using Docker Compose. Click here for the installation guide.
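
For reference, a minimal docker-compose.yml equivalent to the docker run command above could look like the sketch below (the service name is illustrative; see the linked guide for a compose file that also wires in BetterChatGPT or LobeChat):

# docker-compose.yml - minimal sketch for the proxy alone; the service name is illustrative
services:
  chatgpt-proxy:
    image: pawanosman/chatgpt:latest
    ports:
      - "3040:3040"
    restart: unless-stopped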

Your PC/Server

To install and run the ChatGPT API Reverse Proxy on your PC/server, follow these steps:

Note: This option is not available in all countries yet. If you are in a country that is not supported, you can use a U.S. VPN or our hosted API.

  1. Ensure Node.js (v19+) is installed: Download Node.js
  2. Clone this repository:
    git clone https://github.com/PawanOsman/ChatGPT.git
  3. Run start.bat (Windows) or start.sh (Linux, with the bash start.sh command) to install dependencies and launch the server.
  4. Done! You can now connect to your local server's API at:
    http://localhost:3040/v1/chat/completions
    
    Note that the base URL is http://localhost:3040/v1.


Termux on Android Phones

To install and run the ChatGPT API Reverse Proxy on Android using Termux, follow these steps:

  1. Install Termux from the Play Store.

  2. Update Termux packages:

    apt update
  3. Upgrade Termux packages:

    apt upgrade
  4. Install git, Node.js, and npm:

    apt install -y git nodejs
  5. Clone the repository:

    git clone https://github.com/PawanOsman/ChatGPT.git
  6. Navigate to the cloned directory:

    cd ChatGPT
  7. Start the server with:

    bash start.sh
  8. Your local server will now be running and accessible at:

    http://localhost:3040/v1/chat/completions
    

    Note that the base URL is http://localhost:3040/v1.

    You can now use this address, on the same device, to connect to your self-hosted ChatGPT API Reverse Proxy from Android applications or websites that support reverse proxy configuration.

Accessing Our Hosted API

Utilize our pre-hosted ChatGPT-like API for free by:

  1. Joining our Discord server.
  2. Obtaining an API key from the #Bot channel with the /key command.
  3. Incorporating the API key into your requests to:
    https://api.pawan.krd/v1/chat/completions
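
For example, pointing the OpenAI Python client at the hosted endpoint only requires changing the API key and base URL; the sketch below uses a placeholder key (replace it with the one issued by the /key command):

# Minimal sketch for the hosted API; the key below is a placeholder, not a real key.
import openai

openai.api_key = "pk-your-key-from-discord"   # placeholder, use your own key
openai.base_url = "https://api.pawan.krd/v1/"

completion = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say this is a test"}],
)
print(completion.choices[0].message.content)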
    

Usage Examples

Leverage the same integration code as OpenAI's official libraries by simply adjusting the API key and base URL in your requests. For self-hosted setups, be sure to switch the base URL to your local server's address as mentioned above.

Example Usage with OpenAI Libraries

Python Example

import openai

openai.api_key = 'anything'
openai.base_url = "http://localhost:3040/v1/"

completion = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "How do I list all files in a directory using Python?"},
    ],
)

print(completion.choices[0].message.content)
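
Since the proxy supports streaming responses (see Features), a streaming variant of the example above is sketched below; it assumes the same self-hosted base URL and prints tokens as they arrive:

# Minimal streaming sketch (same self-hosted base URL as above).
import openai

openai.api_key = 'anything'
openai.base_url = "http://localhost:3040/v1/"

stream = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write one sentence about streaming."}],
    stream=True,
)
for chunk in stream:
    # Each chunk carries an incremental delta; print content as it arrives.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()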

Node.js Example

import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: "anything",
  baseURL: "http://localhost:3040/v1",
});

const chatCompletion = await openai.chat.completions.create({
  messages: [{ role: 'user', content: 'Say this is a test' }],
  model: 'gpt-3.5-turbo',
});

console.log(chatCompletion.choices[0].message.content);

License

This project is under the AGPL-3.0 License. Refer to the LICENSE file for detailed information.

chatgpt's People

Contributors

alioth1017, changchiyou, cliouo, crescent617, fayland, k8scat, lonelil, m00nfly, mahdeensky, maseshi, pawanosman


chatgpt's Issues

Project direction: Is this project considering logging?

Great project! I wanted to see where this project is going. I am looking for a proxy server so that I can deploy a smart agent on my website that is publicly accessible. I wanted to log sessions in the middleware so I can eventually use LangChain and chat summaries for the end users of my app. Thanks again.

Use this reverse proxy with another implementation of ChatGPT client.

Based on the description here, I tried to use the reverse proxy provided by this project via ChatGPT as follows, but it failed:

$ CHATGPT_BASE_URL=https://api.pawan.krd/backend-api python3 -m revChatGPT.V1

        ChatGPT - A command-line interface to OpenAI's ChatGPT (https://chat.openai.com/chat)
        Repo: github.com/acheong08/ChatGPT
        
Type '!help' to show a full list of commands
Press Esc followed by Enter or Alt+Enter to send a message.

You: 
Hello.

Chatbot: 
{"status":false,"error":"Invalid API key","hint":"You can get an API key from https://discord.pawan.krd","info":"https://gist.github.com/PawanOsman/72dddd0a12e5829da664a43fc9b9cf9a","support":"https://discord.pawan.krd"}
Traceback (most recent call last):
  File "/home/werner/Public/repo/github.com/acheong08/ChatGPT.git/src/revChatGPT/V1.py", line 962, in main
    for data in chatbot.ask(prompt):
  File "/home/werner/Public/repo/github.com/acheong08/ChatGPT.git/src/revChatGPT/V1.py", line 449, in ask
    self.__check_response(response)
  File "/home/werner/Public/repo/github.com/acheong08/ChatGPT.git/src/revChatGPT/V1.py", line 58, in wrapper
    out = func(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^
  File "/home/werner/Public/repo/github.com/acheong08/ChatGPT.git/src/revChatGPT/V1.py", line 557, in __check_response
    raise error
revChatGPT.typings.Error: OpenAI: {"status":false,"error":"Invalid API key","hint":"You can get an API key from https://discord.pawan.krd","info":"https://gist.github.com/PawanOsman/72dddd0a12e5829da664a43fc9b9cf9a","support":"https://discord.pawan.krd"} (code: 400)
Please check that the input is correct, or you can resolve this issue by filing an issue
Project URL: https://github.com/acheong08/ChatGPT

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/werner/Public/repo/github.com/acheong08/ChatGPT.git/src/revChatGPT/V1.py", line 986, in <module>
    main(configure())
  File "/home/werner/Public/repo/github.com/acheong08/ChatGPT.git/src/revChatGPT/V1.py", line 58, in wrapper
    out = func(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^
  File "/home/werner/Public/repo/github.com/acheong08/ChatGPT.git/src/revChatGPT/V1.py", line 972, in main
    raise error from e
revChatGPT.typings.CLIError: command line program unknown error
Please check that the input is correct, or you can resolve this issue by filing an issue
Project URL: https://github.com/acheong08/ChatGPT

I wonder how to obtain the corresponding access_token for this alternative reverse proxy to make the configuration file for the above usage. See here for the related discussion.

How to reset and remember conversations

First of all, thank you for providing such a good unofficial API for ChatGPT.

I see that only one API endpoint is provided. Is there an API to continue conversations or to reset them?

Thank you.

buggy in the chat-gpt-next-web

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Error</title>
</head>
<body>
<pre>Cannot POST /v1/v1/chat/completions</pre>
</body>
</html>

(The Cloudflare challenge scripts embedded in the returned page are omitted here.)

Could not continue the conversation:

Traceback (most recent call last):
  File "/home/maminiainalaic/.local/lib/python3.11/site-packages/openai/openai_object.py", line 59, in __getattr__
    return self[k]
           ~~~~^^^
KeyError: 'choices'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/maminiainalaic/Desktop/apipawn.py", line 38, in <module>
    message = completion.choices[0].message['content']
              ^^^^^^^^^^^^^^^^^^
  File "/home/maminiainalaic/.local/lib/python3.11/site-packages/openai/openai_object.py", line 61, in __getattr__
    raise AttributeError(*err.args)
AttributeError: choices

large question problem

If I ask a long question, it always says: "Hello! I am ChatGPT, a language model developed by OpenAI. How can I assist you today?" Please fix this.

Self-Host: UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'ip' of undefined

I started the proxy on my machine, but it fails to forward requests; the error is always:

UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'ip' of undefined
at rateLimitMiddleware (file:///root/ChatGPT/middlewares.js:13:160)
at Layer.handle [as handle_request] (/root/ChatGPT/node_modules/express/lib/router/layer.js:95:5)
at trim_prefix (/root/ChatGPT/node_modules/express/lib/router/index.js:328:13)
at /root/ChatGPT/node_modules/express/lib/router/index.js:286:9
at Function.process_params (/root/ChatGPT/node_modules/express/lib/router/index.js:346:12)
at next (/root/ChatGPT/node_modules/express/lib/router/index.js:280:10)
at corsMiddleware (file:///root/ChatGPT/middlewares.js:9:5)
at Layer.handle [as handle_request] (/root/ChatGPT/node_modules/express/lib/router/layer.js:95:5)

How can I solve this issue?

CORS Origin issue

Please fix the CORS origin issue.
Access to fetch at 'https://gpt.pawan.krd/api/completions' from origin 'http://localhost:8080' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: It does not have HTTP ok status.

Whenever I try, I get this response.
My entire code is:

<body>

	<label for="prompt">Prompt:</label>
	<textarea id="prompt" rows="4"></textarea>
	<button onclick="generateText()">Generate Text</button>
	
	<label for="generated-text">Generated Text:</label>
	<div id="output"></div>
	
	<script>
		function generateText() {
			const prompt = document.getElementById('prompt').value;
			const url = 'https://gpt.pawan.krd/api/completions';
			const payload = {
				"prompt": prompt,
				"temperature": 0.7,
				"max_tokens": 256,
				"top_p": 0.9,
				"frequency_penalty": 0,
				"presence_penalty": 0,
				"model": "text-davinci-003",
				"stop": ""
			}
			const headers = {
			    'Authorization': 'Bearer <API from Discord>',
			    'Content-Type': 'application/json'
			}
			fetch(url, {
				method: 'POST',
				headers: headers,
				body: JSON.stringify(payload)
			})
			.then(response => response.text())
			.then(text => {
				document.getElementById('output').innerText = text;
			})
			.catch(error => console.error(error));
		}
	</script>
</body>

DALL-E

Traceback (most recent call last):
  File "/home/maminiainalaic/Desktop/DALL-E.py", line 8, in <module>
    response = openai.Image.create(
               ^^^^^^^^^^^^^^^^^^^^
  File "/home/maminiainalaic/.local/lib/python3.11/site-packages/openai/api_resources/image.py", line 36, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/home/maminiainalaic/.local/lib/python3.11/site-packages/openai/api_requestor.py", line 230, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/maminiainalaic/.local/lib/python3.11/site-packages/openai/api_requestor.py", line 624, in _interpret_response
    self._interpret_response_line(
  File "/home/maminiainalaic/.local/lib/python3.11/site-packages/openai/api_requestor.py", line 687, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This API is under maintenance, please join our discord server to learn more, https://discord.pawan.krd

self is working?

I'm trying to use the self-hosted API. It returns an "incorrect API key" response even though the API key is correct.

Proxy support

With the current implementation, it's quite easy to detect that there are multiple calls coming from the same IP but using different API keys, which eventually leads to getting the API keys and related accounts blocked.

It would be great to have some support to rotate between proxies and API key combinations, this makes it harder to correlate the API keys and used IP's, resulting in less accounts blocked.

This tool doesn't work.

I tried to test this project using the example here, but it failed:

(datasci) werner@X10DAi:~$ ipython
Python 3.11.1 (main, Dec 22 2022, 17:06:07) [GCC 12.2.0]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.7.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import requests
   ...: 
   ...: def get_response(text, lang):
   ...:   params = {'text': text, 'lang': lang}
   ...:   response = requests.get('https://api.pawan.krd/chat/gpt', params=param
   ...: s)
   ...:   return response.json()
   ...: 
   ...: response = get_response('Hello', 'en')
   ...: print(response)
---------------------------------------------------------------------------
JSONDecodeError                           Traceback (most recent call last)
File ~/.pyenv/versions/3.11.1/envs/datasci/lib/python3.11/site-packages/requests/models.py:971, in Response.json(self, **kwargs)
    970 try:
--> 971     return complexjson.loads(self.text, **kwargs)
    972 except JSONDecodeError as e:
    973     # Catch JSON-related errors and raise as requests.JSONDecodeError
    974     # This aliases json.JSONDecodeError and simplejson.JSONDecodeError

File ~/.pyenv/versions/3.11.1/lib/python3.11/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
    343 if (cls is None and object_hook is None and
    344         parse_int is None and parse_float is None and
    345         parse_constant is None and object_pairs_hook is None and not kw):
--> 346     return _default_decoder.decode(s)
    347 if cls is None:

File ~/.pyenv/versions/3.11.1/lib/python3.11/json/decoder.py:337, in JSONDecoder.decode(self, s, _w)
    333 """Return the Python representation of ``s`` (a ``str`` instance
    334 containing a JSON document).
    335 
    336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    338 end = _w(s, end).end()

File ~/.pyenv/versions/3.11.1/lib/python3.11/json/decoder.py:355, in JSONDecoder.raw_decode(self, s, idx)
    354 except StopIteration as err:
--> 355     raise JSONDecodeError("Expecting value", s, err.value) from None
    356 return obj, end

JSONDecodeError: Expecting value: line 1 column 1 (char 0)

During handling of the above exception, another exception occurred:

JSONDecodeError                           Traceback (most recent call last)
Cell In[1], line 8
      5   response = requests.get('https://api.pawan.krd/chat/gpt', params=params)
      6   return response.json()
----> 8 response = get_response('Hello', 'en')
      9 print(response)

Cell In[1], line 6, in get_response(text, lang)
      4 params = {'text': text, 'lang': lang}
      5 response = requests.get('https://api.pawan.krd/chat/gpt', params=params)
----> 6 return response.json()

File ~/.pyenv/versions/3.11.1/envs/datasci/lib/python3.11/site-packages/requests/models.py:975, in Response.json(self, **kwargs)
    971     return complexjson.loads(self.text, **kwargs)
    972 except JSONDecodeError as e:
    973     # Catch JSON-related errors and raise as requests.JSONDecodeError
    974     # This aliases json.JSONDecodeError and simplejson.JSONDecodeError
--> 975     raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)

JSONDecodeError: Expecting value: line 1 column 1 (char 0)

Is there any problem with my usage?

I cannot use model "gpt-3.5-turbo"

I send request like:

curl --location 'https://api.pawan.krd/v1/chat/completions' \
--header 'Authorization: Bearer pk-***[OUR_API_KEY]***' \
--header 'Content-Type: application/json' \
--data '{
    "model": "gpt-3.5-turbo",
    "max_tokens": 100,
    "messages": [
        {
            "role": "system",
            "content": "You are an helpful assistant."
        },
        {
            "role": "user",
            "content": "Who are you?"
        }
    ]
}'

I got following response:

{"status":false,"error":{"message":"We couldn't find the model "gpt-3.5-turbo", join our discord server if you have any questions, https://discord.pawan.krd","type":"invalid_request_error"},"support":"https://discord.pawan.k

The Chat Completion API doesn't remember.

The Chat Completion API doesn't remember my previous prompts and its previous responses.

Here is my first JavaScript request code:

fetch('https://api.pawan.krd/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer pk-***[OUR_API_KEY]***',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    'model': 'gpt-3.5-turbo',
    'max_tokens': 100,
    'messages': [
      {
        'role': 'user',
        'content': 'Can I give you a nickname?'
      }
    ],
    'user': 'b17ffc32-e43f-11ed-b5ea-0242ac120002'
  })
});

The AI then responds with:

 Sure, you can give me a nickname.

Here is my follow-up JavaScript request code:

fetch('https://api.pawan.krd/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer pk-***[OUR_API_KEY]***',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    'model': 'gpt-3.5-turbo',
    'max_tokens': 100,
    'messages': [
      {
        'role': 'user',
        'content': 'Your nickname is Xcross'
      }
    ],
    'user': 'b17ffc32-e43f-11ed-b5ea-0242ac120002'
  })
});

The AI then responds with:

 As an AI language model, I do not have a nickname.  However, if you would like to refer to me as Xcross, you are welcome to do so.

I then send:

fetch('https://api.pawan.krd/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer pk-***[OUR_API_KEY]***',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    'model': 'gpt-3.5-turbo',
    'max_tokens': 100,
    'messages': [
      {
        'role': 'user',
        'content': 'What did I say your nickname was again?'
      }
    ],
    'user': 'b17ffc32-e43f-11ed-b5ea-0242ac120002'
  })
});

It responds with:

 As an AI language model, you did not say my nickname as I do not have one.

Am I doing something wrong?

You have already used an API key on this IP address

Hi All,

I am trying to use the hosted API, but I always get this message: "You have already used an API key on this IP address myIP". I reset the API key with the Discord bot, but the message appears again when I try the API as shown below:

Request:
curl --location 'https://api.pawan.krd/v1/completions' \
--header 'Authorization: Bearer pk-myapikey' \
--header 'Content-Type: application/json' \
--data '{
    "model": "text-davinci-003",
    "prompt": "Human: Hello\nAI:",
    "temperature": 0.7,
    "max_tokens": 256,
    "stop": [
        "Human:",
        "AI:"
    ]
}'

Response:
{
    "status": false,
    "error": "You have already used an API key on this IP address myip",
    "hint": "You can get support from https://discord.pawan.krd",
    "info": "https://gist.github.com/PawanOsman/72dddd0a12e5829da664a43fc9b9cf9a",
    "support": "https://discord.pawan.krd"
}

Error with the created attribute in ChatResponse

From [https://platform.openai.com/docs/api-reference/chat/object]

The created attribute is an integer and represents "The Unix timestamp (in seconds) of when the chat completion was created."

The value returned by your API is clearly not a time in seconds. Example: 1706937715819 does not fit in an integer, which causes an exception.

LangChain: Unable to parse response from reverse proxy

It seems like LangChain is unable to parse the response returned by the reverse proxy because the JSON object is missing the id and created fields. I tested with the two fields added and it works well. Would it be possible to add these two fields to the JSON response? Their values can be null.

OpenAI's official API:

{
  "id": "cmpl-7ANU4Oi82ua4aJZJxK88r7AGqpxzc",
  "object": "text_completion",
  "created": 1682707908,
  "model": "text-ada-001",
  "choices": [
    {
      "text": "\n\nYes, this is a test.",
      "index": 0,
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 9,
    "total_tokens": 19
  }
}

Reverse proxy:

{
  "object": "text_completion",
  "model": "text-davinci-003",
  "choices": [
    {
      "text": " This is a test.",
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 5,
    "total_tokens": 15
  }
}

How to host it on vps?

Hello, when I run the code on my VPS and point my GPT code at the VPS's IP, it won't work; it only works if I run my GPT code on the VPS itself. How can I host this repo on my VPS and make it work with my GPT code remotely?

about ip limitation

In Turkey, the IP address changes every time mobile internet reconnects or the cable modem restarts, so we have to get a new key each time.

Is it possible to replace the IP-address limitation with an alternative limiting method?

Reverse chatGPT error

{ "status": false, "error": { "message": "Please wait a few minutes and try again!", "type": "api_not_ready_or_request_error" } }

Is this really chatgpt?

I tried this, but it's giving me a weird result that ChatGPT would never say.

Code

Prompt: are u chatgpt?

const response = await ai.createCompletion({
  model: 'text-davinci-003',
  prompt: 'are u chatgpt?',
  temperature: 0.7,
  max_tokens: 256,
  top_p: 1,
  frequency_penalty: 0,
  presence_penalty: 0
})

Results

This repo:

ChatGPT is a chatbot platform developed by OpenAI. It uses natural language processing (NLP) to create automated conversations with users, allowing them to ask questions and get answers in real-time.

Official ChatGPT:

Yes, I am ChatGPT, a large language model trained by OpenAI, based on the GPT-3.5 architecture. How can I assist you today?

Garbage in responses

Sometimes, strange garbage appears in server responses. I wrote a function to remove it, but it doesn't work very well. Maybe it can be fixed on the server side. With the original ChatGPT, there is no such problem.

Garbage in the server response: two �� symbols appear instead of a letter in a word.

использовать -> испол��зовать

request

    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        max_tokens=max_tok,
        temperature=temp,
        timeout=timeou
    )
    
    response = completion.choices[0].message.content

bad response


Kir: Obsidian и Notion - это два разных инструмента, каждый со своими преимуществами и недостатками. Однако, вы можете использовать их совместно, чтобы получить максимальную выгоду от обоих инструментов.

Вот несколько способов, которыми вы можете использовать Obsidian и Notion вместе:

  1. Хран��те свои заметки в Obsidian, а документацию и проекты в Notion. Вы можете использовать Obsidian для хранения своих личных заметок, идей и мыслей, а Notion - для хранения документации по проектам, задачам и другой информации, которую вы хотите поделиться со своей командой.

  2. Используйте Notion для создания базы знаний, а Obsidian - для ее расширения. Вы можете использовать Notion для создания базы знаний, а затем испол��зовать Obsidian для расширения ее функционала, добавления новых заметок и связей между ними.

  3. Используйте Notion для управления задачами, а Obsidian - для создания связей между ними. Вы можете использовать Notion для управления своими задачами и планами, а затем использовать Obsidian для создания связей между ними и другой информацией, которая может помочь вам выполнить задачи более эффективно.


function to fix

import re

import enchant  # pyenchant


def check_and_fix_text(text):
    """Try to fix a strange quirk of the GPT server, which often makes a mistake in a word and inserts 2 question marks instead of a letter."""

    ru = enchant.Dict("ru_RU")

    # Remove everything from the text except Russian letters, replacing the 2 strange characters with 1 to simplify the regex
    text = text.replace('��', '⁂')
    russian_letters = re.compile(r'[^⁂а-яА-ЯёЁ\s]')
    text2 = russian_letters.sub(' ', text)

    words = text2.split()
    for word in words:
        if '⁂' in word:
            suggestions = ru.suggest(word)
            if len(suggestions) > 0:
                text = text.replace(word, suggestions[0])

    # If no suitable word is found in the dictionary, the character is simply removed. It's better to have a typo than garbage.
    return text.replace('⁂', '')

Incorrect API key provided

import openai

openai.api_key = my_key
openai.api_base = 'https://api.pawan.krd/v1'


response = openai.completions.create(
  model="text-davinci-003",
  prompt="Human: Hello\nAI:",
  temperature=0.7,
  max_tokens=256,
  top_p=1,
  frequency_penalty=0,
  presence_penalty=0,
  stop=["Human: ", "AI: "]
)

print(response.choices[0].text)

openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: pk-MNLTc***************************************CqBD. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

The same error, when I use openai.api_base = 'https://api.pawan.krd/pai-001-light-beta/v1'

BTW, you should update your README, because openai.Completion is no longer supported in openai versions higher than 0.28.

Vercel Flask: Returning what seems to be a website but not quite

When I run it on my local machine, everything works fine, but when I host it on Vercel, it gives me this error:

(The response body is a Cloudflare "managed challenge" HTML page served by api.pawan.krd for the POST to /v1/chat/completions, rather than a JSON completion; the full challenge markup is omitted here.)

I am using the Python OpenAI library instead of making raw requests. I've tried using requests, but it says no prompt was provided every time, EVEN if I gave it a prompt.

How is this free?

Is it a community-maintained API or something like that? If so, what happens if it reaches too high a cost? Will you limit the number of API keys or something?

Issue with Java okhttp

import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class Main {
    public static void main(String[] args) {
        OkHttpClient client = new OkHttpClient();

        Request request = new Request.Builder()
                .url("https://api.pawan.krd/chat/gpt?text=hello")
                .build();

        try {
            Response response = client.newCall(request).execute();
            String responseBody = response.body().string();
            System.out.println(responseBody);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Response:

<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>openresty</center>
</body>
</html>

(The Cloudflare challenge script embedded in the returned page is omitted here.)
