
vscode-chatgpt-reborn's Introduction

VSCode Reborn AI

Write, refactor, and improve your code in VSCode using AI. With VSCode Reborn AI, you decide which AI you want to use.

Code offline with AI using a local LLM.

Enhanced support for OpenRouter.ai (API) and Ollama (local).

Get for VSCode

Search for "VSCode Reborn AI" in the VSCode extension search.

Or install it directly from the Visual Studio Marketplace.

Or build this extension yourself (see the Development section further down).

Local LLMs and Proxies

Any tool that is "compatible" with the OpenAI API should work with this extension. The tools listed below are the ones I've personally tested.

Local LLMs tested to work with this extension

Alternative APIs tested to work with this extension

Proxies

I've set up a proxy for anyone who needs it at https://openai-proxy.dev/v1. It's running the x-dr/chatgptProxyAPI code on Cloudflare Workers. This is mainly for anyone who wants to use OpenAI but cannot because api.openai.com is blocked.
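As a rough illustration, "compatible with the OpenAI API" just means the same `POST {baseUrl}/chat/completions` request shape works whether the base URL points at the proxy above, a local server such as Ollama, or api.openai.com itself. A minimal sketch (the model name and local URL in the comments are assumptions, not extension code):

```typescript
// Minimal sketch of an OpenAI-compatible chat completion request.
// The base URL is interchangeable: the proxy above, a local Ollama or
// LM Studio server, or api.openai.com directly.
const baseUrl = "https://openai-proxy.dev/v1"; // e.g. "http://localhost:11434/v1" for Ollama
const apiKey = process.env.OPENAI_API_KEY ?? "";

async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo", // or whichever model the server exposes
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status} ${res.statusText}`);
  }
  const data = await res.json();
  return data.choices[0].message.content;
}
```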

Internationalization

Translated to: 🇩🇪 🇪🇸 🇫🇷 🇮🇹 🇯🇵 🇰🇷 🇳🇱 🇵🇱 🇵🇹 🇹🇷 🇺🇦 🇨🇳 🇹🇼

Most of this extension has been translated to about a dozen languages. The translations are not perfect and may not be correct in some places. If you'd like to help with translations, please see the i18n discussion.

Development

Clone this repo

git clone https://github.com/Christopher-Hayes/vscode-chatgpt-reborn.git

Setup

yarn

Build the extension

yarn run build

Test new features in VS Code

To test the vscode-chatgpt-reborn extension in VS Code, follow these steps:

  1. Open the project directory in Visual Studio Code.

  2. Press F5 or click Run > Start Debugging in the menu to start a new Extension Development Host instance with the extension loaded.

  3. In the Extension Development Host instance, test the extension's functionality.

  4. Use the Debug Console in the main Visual Studio Code window to view any output or errors.

  5. To make changes to the extension, just update the code; VS Code will automatically run the `yarn run watch` script. To test your changes, reload the extension by pressing Ctrl + Shift + F5 (Cmd + Shift + F5 on macOS), or by clicking Run > Restart Debugging.

Package for VS Code

yarn run package # Runs `vsce package`

Changelog

See the CHANGELOG for a list of past updates, and upcoming unreleased features.

Tech

Yarn - TypeScript - VSCode Extension API - React - Redux - React Router - Tailwind CSS

  • This extension has a custom UI built with React + Tailwind CSS, but theme support and staying consistent with VSCode's UI components remain a priority.

License

This project is licensed under the ISC License - see the LICENSE file for details.

vscode-chatgpt-reborn's People

Contributors

christopher-hayes, dependabot[bot], gencay, nickv2002, peterdavehello, zsgsdesign, zzy-life


vscode-chatgpt-reborn's Issues

Feature request: Configure system message for chat context.

Describe the feature

Allow the user to configure the system message that is sent to ChatGPT to set the chat context.

  • Right now this is hard-coded, so it can be converted to an extension setting that can be configured.

This part needs to be modified: https://github.com/Christopher-Hayes/vscode-chatgpt-reborn/blob/9851608e76859d15e1170edfce7899b9ca07b54d/src/chatgpt-view-provider.ts#L305
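For illustration, a minimal sketch of what such a setting could look like, reading the system message from workspace configuration and falling back to a default; the `chatgpt.systemMessage` key and the default text below are hypothetical, not the extension's actual names:

```typescript
import * as vscode from "vscode";

// Hypothetical setting key; falls back to a hard-coded default when the
// user has not configured anything.
const DEFAULT_SYSTEM_MESSAGE =
  "You are a helpful assistant that writes and improves code.";

function getSystemMessage(): string {
  const config = vscode.workspace.getConfiguration("chatgpt");
  return config.get<string>("systemMessage") ?? DEFAULT_SYSTEM_MESSAGE;
}

// Usage sketch: prepend the configured system message to the conversation.
const messages = [
  { role: "system", content: getSystemMessage() },
  { role: "user", content: "Refactor this function..." },
];
```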

Details

gencay/vscode-chatgpt#188
gencay/vscode-chatgpt#228

Auto scroll doesn't always work

Describe the Bug

With the latest 3.11.2, I still find that the auto scroll to the bottom won't work in some cases.
gencay/vscode-chatgpt#212

Please tell us if you have customized any of the extension settings or whether you are using the defaults.

All the settings are left as default, but I found one setting, Stream Throttling (default 100), which seems related to UI performance. Would changing this value help with the issue of auto-scroll failing to catch up with the streaming response?

Additional context

As can be seen from the relative position of the scroll bar and the response in the following snapshot, the response keeps generating but the scroll position does not move, even after generation finishes.

(Screenshot attached.)
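For reference, one way throttling and auto-scroll can interact is sketched below: a throttled scroll-to-bottom that schedules a final trailing scroll so the view cannot be left stuck mid-response when streaming ends. This is an illustrative sketch, not the extension's actual implementation:

```typescript
// Sketch of a throttled auto-scroll for a scrollable chat pane. The
// throttle interval stands in for the "Stream Throttling" setting.
function makeAutoScroller(pane: HTMLElement, throttleMs = 100) {
  let lastScroll = 0;
  let pending = false;

  const scrollToBottom = () => {
    pane.scrollTop = pane.scrollHeight;
    lastScroll = Date.now();
    pending = false;
  };

  // Call this every time a new chunk of the streamed response arrives.
  return () => {
    const elapsed = Date.now() - lastScroll;
    if (elapsed >= throttleMs) {
      scrollToBottom();
    } else if (!pending) {
      pending = true;
      // Guarantees one final scroll after the last chunk, so the view
      // isn't left stuck when generation finishes between throttle ticks.
      setTimeout(scrollToBottom, throttleMs - elapsed);
    }
  };
}
```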

Feature Request: Add option to use content of open editor tabs

Discussed in #29

Originally posted by chriamue April 21, 2023
It would be great to have an option that allows us to use the content of all open editor tabs, similar to the existing "Use Editor Selection" feature.

This would provide users with the flexibility to include relevant information from multiple tabs when making API calls, resulting in better context-aware suggestions and responses from ChatGPT. In addition, it could save time and effort, as users would no longer need to manually copy and paste the content from various tabs to include in the API call.

Here is a draft commit:
develop...chriamue:vscode-chatgpt-reborn:feature/files-as-context
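For illustration, a sketch of how the open-tab contents could be gathered with the VS Code tab API before being attached to the API call (the function name and formatting are hypothetical, not the draft commit's code):

```typescript
import * as vscode from "vscode";

// Sketch of collecting the contents of all open text editor tabs so they
// can be included as extra context, alongside "Use Editor Selection".
async function collectOpenTabContents(): Promise<string> {
  const chunks: string[] = [];
  for (const group of vscode.window.tabGroups.all) {
    for (const tab of group.tabs) {
      if (tab.input instanceof vscode.TabInputText) {
        const doc = await vscode.workspace.openTextDocument(tab.input.uri);
        chunks.push(
          `// File: ${vscode.workspace.asRelativePath(doc.uri)}\n${doc.getText()}`
        );
      }
    }
  }
  return chunks.join("\n\n");
}
```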

Setting `chatgpt.gpt3.maxTokens` = `4096` with `gpt-3.5-turbo` always receives a `400 Bad Request`

Describe the Bug

According to the documentation (https://platform.openai.com/docs/models/gpt-3-5), the max tokens of gpt-3.5-turbo is 4096. However, when I set chatgpt.gpt3.maxTokens to 4096, I get 400 Bad Request no matter how short my code is.

Please tell us if you have customized any of the extension settings or whether you are using the defaults.

Using OpenAI API Key with model gpt-3.5-turbo, and customized maxTokens to 4096

Additional context

I tried setting it to values from 3800 to 4000 and the problem is still there, but 3750 starts to work for me. In some cases it needs to be even lower, even though the code is far too short (only a few lines, for testing purposes) to run out of tokens, which is very interesting.
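The likely cause is that max_tokens only counts completion tokens, and the prompt plus completion must together fit within the 4096-token context window, so requesting 4096 completion tokens leaves no room for any prompt at all. A sketch of the clamping the extension could do (the numbers are for gpt-3.5-turbo; the function is illustrative, not the extension's code):

```typescript
// Sketch of clamping maxTokens so prompt + completion fit in the context
// window. promptTokens would come from whatever tokenizer the extension uses.
const CONTEXT_WINDOW = 4096; // gpt-3.5-turbo

function clampMaxTokens(promptTokens: number, requestedMaxTokens: number): number {
  // The API rejects requests where promptTokens + max_tokens exceeds the
  // context window, which is why max_tokens = 4096 fails even for a tiny prompt.
  const available = CONTEXT_WINDOW - promptTokens;
  return Math.max(1, Math.min(requestedMaxTokens, available));
}
```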

Page Experience

Describe the feature

The current UI layout and code highlight theme are hard to get used to; I prefer the original version's UI style. The layout currently feels cluttered, and it's hard to quickly focus on the important content areas. I hope the UI can be rebalanced a bit.

GPT-4 Turbo (gpt-4-1106-preview) model not included in ChatGPT Reborn VSCode extension

Describe the feature

Any plan to add support for GPT-4 Turbo, aka gpt-4-1106-preview?

The latest GPT-4 model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Returns a maximum of 4,096 output tokens. This preview model is not yet suited for production traffic. It has a context window of 128,000 tokens.

See: https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo

Can Azure OpenAI Endpoint Work?

Describe the feature

Given the current turmoil at OpenAI, can we change the Api Base Url in settings to an Azure endpoint, and will it work? I'd like to get confirmation before creating an Azure account.
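For what it's worth, Azure OpenAI is not quite a drop-in base-URL swap: it uses a deployment-scoped path, an api-version query parameter, and an api-key header instead of Authorization: Bearer. A hedged sketch of the request shape (resource name, deployment name, and api-version below are placeholders); whether the extension's apiBaseUrl setting alone can express this depends on how it builds requests:

```typescript
// Sketch of the Azure OpenAI request shape; the resource, deployment,
// and api-version values are placeholders.
const endpoint = "https://YOUR_RESOURCE.openai.azure.com";
const deployment = "YOUR_DEPLOYMENT";
const apiVersion = "2023-05-15";

async function azureChat(prompt: string, apiKey: string) {
  const url = `${endpoint}/openai/deployments/${deployment}/chat/completions?api-version=${apiVersion}`;
  const res = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Azure uses an "api-key" header rather than "Authorization: Bearer ...".
      "api-key": apiKey,
    },
    body: JSON.stringify({ messages: [{ role: "user", content: prompt }] }),
  });
  return res.json();
}
```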

'Unexpected end of JSON input' with custom apiBaseUrl

Describe the Bug

Hi

I have replaced the default chatgpt.gpt3.apiBaseUrl with the proxy API URL in settings.json:

"chatgpt.gpt3.apiBaseUrl": "https://openai.1rmb.tk/api/v1",

It returns the error message: Unexpected end of JSON input

I have tested the proxy API with a curl command and it works OK:

โฏ curl -sS https://chatai.1rmb.tk/api/v1/chat/completions
{
    "error": {
        "message": "You didn't provide an API key. You need to provide your API key in an Authorization header using Bearer auth (i.e. Authorization: Bearer YOUR_KEY), or as the password field (with blank username) if you're accessing the API from your browser and are prompted for a username and password. You can obtain an API key from https://platform.openai.com/account/api-keys.",
        "type": "invalid_request_error",
        "param": null,
        "code": null
    }
}

Where are you running VSCode? (Optional)

None

Which OpenAI model are you using? (Optional)

None

Additional context (Optional)

No response

"Show Conversations" inside the three dots does nothing

Describe the Bug

  • Install the extension
  • Chat with it for multiple conversations
  • Hit "Show Conversations"
  • Nothing happened

Expected: somewhere showing the chat history

OS: Windows 10
VS Code: 1.76.2

Please tell us if you have customized any of the extension settings or whether you are using the defaults.

I use OpenAI API Key method with gpt-3.5-turbo model.

Additional context

No response

Resending prompts with code seems to lose the code block

Describe the Bug

At least in the case of an OpenAI 400 error, resending a prompt with a code block attached seems to lose the code block.

Where are you running VSCode? (Optional)

MacOS

Which OpenAI model are you using? (Optional)

gpt-4

Additional context (Optional)

No response

Markdown problems

Describe the Bug

  • bullet points are not shown in the chat pane
  • exporting to a markdown file produces HTML?
  • when uploading context to the API, previous API output is sent back as HTML?

(Screenshots attached.)

Where are you running VSCode? (Optional)

MacOS

Which OpenAI model are you using? (Optional)

None

Additional context (Optional)

Model used is Claude 2

Where can I find a way to use similar functions through browser login?

Describe the feature

I want to know where I can find a way to use similar functionality through a browser login. The problem I face is that the API key tells me I have already exceeded my quota, but I can still access ChatGPT through the browser, so setting an API key doesn't work for me. Is there any version that can be installed locally? I need this functionality. Thank you for your help.

404 NOT FOUND Request failed with status code 404

Describe the Bug

How to reproduce:

  1. Select some code in editor
  2. Use command: ChatGPT: Optimize
  3. Returns a 404 error

Environment details if needed:

Version: 1.76.2
Commit: ee2b180d582a7f601fa6ecfdad8d9fd269ab1884
Date: 2023-03-14T17:53:46.528Z
Electron: 19.1.11
Chromium: 102.0.5005.196
Node.js: 16.14.2
V8: 10.2.154.26-electron.0
OS: Darwin arm64 21.6.0
Sandboxed: No

Please tell us if you have customized any of the extension settings or whether you are using the defaults.

  1. Using OpenAI API Key method
  2. Model: gpt-3.5-turbo
  3. Using the default configuration without any changes

Additional context

(Screenshot attached.)

Something that may be worth noting, which I found when searching for the issue on Google:

javascript - OpenAI ChatGPT (GPT-3.5) API error 404: "Request failed with status code 404" - Stack Overflow

"Stop" doesn't always work

Describe the Bug

There are instances where "stop" does not do anything. It may be related to re-sending a prompt.

Where are you running VSCode? (Optional)

MacOS

Which OpenAI model are you using? (Optional)

gpt-4

Additional context (Optional)

No response

Option to start new conversation

Describe the feature

Hi,

Until recently the UI had the option to start a new conversation, which now seems to be missing in the new UI from the past few days.

(Screenshot attached.)

I use it from time to time when the AI's answers become erratic.

Move context menu items into menu

Describe the feature

GitHub's Copilot has context menu items similar to our extension's, except they are placed inside a submenu to avoid polluting the context menu too much. I think this would be a good idea for this extension to adopt as well.

Copilot
Screenshot of copilot's context menu items - neatly inside a menu.

Reborn
Screenshot of reborn's context menu items - they take up a lot of space.

This would also allow us to remove the "ChatGPT: " prefix, which would make it more readable as well.

The first time you use the plug-in, after entering the apiBaseUrl and the key, clicking verify gives no response.

Describe the Bug

The first time you use the plug-in, after entering the apiBaseUrl, entering the key, and clicking to verify, there is no response. (If there is a response, asking a question also reports a 401 error. You need to restart VSCode for it to work normally.)

Where are you running VSCode? (Optional)

None

Which OpenAI model are you using? (Optional)

None

Additional context (Optional)

No response

Code-Server / Browser Support

Describe the feature

Hi,

I've got 2 questions:
a) Will it be possible to add a login for plan users to the plugin? I have the ChatGPT+ plan and can use that service as much as I want, so I don't want to fiddle with per-token payment.

b) I use code-server (https://github.com/coder/code-server) on a Linux VM so I can always develop on the go. I installed the plugin there but it doesn't seem to work. Any clue?

Price Preview

Describe the feature

Instead of having to click on the tokens, it'd be nice to see the max cost right below the input field, like this:
mockup a (no ≤)
or like this:
mockup b (with ≤)

Pass full local files to OpenAI API

Describe the feature

An important limitation of the current implementation for me is that I would like to pass JSON/CSV files of input data and expected output data to the OpenAI API and ask it to generate a function or method to perform the transformation. VSCode has access to all files in the current workspace folder, and being able to pass them to the OpenAI API would be very helpful.
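As a sketch, workspace files could be gathered with the standard VS Code filesystem APIs before being attached to the prompt; the glob pattern and size limit below are illustrative, not a proposed final design:

```typescript
import * as vscode from "vscode";

// Sketch of reading JSON/CSV files from the current workspace folder so
// their contents can be attached to the API prompt.
async function collectDataFiles(maxBytes = 50_000): Promise<string> {
  const uris = await vscode.workspace.findFiles(
    "**/*.{json,csv}",
    "**/node_modules/**"
  );
  const parts: string[] = [];
  for (const uri of uris) {
    const bytes = await vscode.workspace.fs.readFile(uri);
    if (bytes.byteLength > maxBytes) continue; // skip files too large for the context window
    const text = new TextDecoder("utf-8").decode(bytes);
    parts.push(`// ${vscode.workspace.asRelativePath(uri)}\n${text}`);
  }
  return parts.join("\n\n");
}
```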

Broken detection of code section boundaries

Describe the Bug

  1. There should be one code section rendered in the conversation window, starting after the "Here's an example code snippet..." line and ending before the "In this example..." line.
(Screenshot attached.)
  2. There also seems to be something fishy going on with the closing paragraph HTML tag </p> at the end of rendered model responses; sometimes one or more > characters are added. I don't know how to determine whether it is bad model output or a rendering problem. Is there a way to examine the raw, streamed model responses?
(Screenshot attached.)

There are other rendering and parsing problems with code sections that can be seen in the attached file.

chat_6_transcript.md

Where are you running VSCode? (Optional)

MacOS

Which OpenAI model are you using? (Optional)

Llama 2 based model, running on a Mac, served with LM Studio

Additional context (Optional)

extension v3.19.1

Configure prompt templates or message prefix and suffix

Describe the feature

Allow the user to configure prompt templates or, at minimum, set a prefix and suffix for the user-entered message.

VSCode with this extension makes a great UI for various models, not necessarily coming from OpenAI. Some of them require specially formatted prompts, specific User/Assistant names, etc.

This extension works with Claude 2 through the claude-to-chatgpt (https://github.com/jtsang4/claude-to-chatgpt) API adapter.

There is also a host of models that can be hosted locally, e.g. with GPT4All, LM Studio, or oobabooga/text-generation-webui. All of them come with OpenAI API-like servers built in.

I checked this extension with, for example, Lazarus 30B and WizardCoder 15B ;-)
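A minimal sketch of what the prefix/suffix part could look like, assuming hypothetical `chatgpt.promptPrefix` / `chatgpt.promptSuffix` settings (the keys are not the extension's actual names):

```typescript
import * as vscode from "vscode";

// Sketch of wrapping the user's message with a configurable prefix and
// suffix before it is sent to the API.
function applyPromptTemplate(userMessage: string): string {
  const config = vscode.workspace.getConfiguration("chatgpt");
  const prefix = config.get<string>("promptPrefix") ?? "";
  const suffix = config.get<string>("promptSuffix") ?? "";
  // e.g. prefix = "### User:\n", suffix = "\n### Assistant:" for models
  // that expect an instruction-style template.
  return `${prefix}${userMessage}${suffix}`;
}
```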

Token counting

Describe the Bug

There are a couple of remaining issues with token counting:

With a maxTokens of 4096, the OpenAI API should not return a 400, but for long prompts it still does, indicating the token calculations are off.

The "At most" figure in the token usage panel may be off. As the "at least" goes up, the "at most" goes down; this somewhat makes sense, since completions are more expensive than prompts. However, the "at most" drops below the "at least", which is obviously incorrect.
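For reference, the invariant the panel should preserve: "at least" is the cost of the prompt alone, and "at most" is the prompt plus however many completion tokens can still fit in the context window, so "at most" can never drop below "at least". A sketch with placeholder per-1K-token prices (real values depend on the model):

```typescript
// Sketch of lower/upper cost bounds for a single request.
const PROMPT_PRICE_PER_1K = 0.0015;     // placeholder
const COMPLETION_PRICE_PER_1K = 0.002;  // placeholder
const CONTEXT_WINDOW = 4096;

function costBounds(promptTokens: number, maxTokens: number) {
  // The completion can never exceed what is left of the context window.
  const maxCompletion = Math.max(0, Math.min(maxTokens, CONTEXT_WINDOW - promptTokens));
  const atLeast = (promptTokens / 1000) * PROMPT_PRICE_PER_1K;
  const atMost = atLeast + (maxCompletion / 1000) * COMPLETION_PRICE_PER_1K;
  // By construction atMost >= atLeast, which the current panel violates.
  return { atLeast, atMost };
}
```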

Please tell us if you have customized any of the extension settings or whether you are using the defaults.

todo: remove this section from the bug template

Additional context

No response

Make TokenCountPopup more readable

Describe the feature

(Current UI screenshot.)

I think the token count popup is a bit cluttered.
I've made some mockups, with incremental changes:
Shorten the token-to-price calculation and move the info on how it's calculated into an info popup (which would show up when you hover over the info symbol).
Mockup 1
Improve the calculation wording.
Mockup 2
Shorten the tip at the bottom.
Mockup 3

Maybe you could even make a table with the lowest and highest price.

OpenRouter API Support

I love this extension and I have been using it heavily as part of VSCode. However, I'm slowly starting to see custom-tuned models outperforming GPT in many coding aspects.

While it's hard to integrate and cater to many models, there are tools like OpenRouter (https://openrouter.ai/) that provide a unified API gateway to many such models, including GPT.

While it might be a big effort, I think supporting OpenRouter could significantly help a lot of developers like me.

Happy to provide any testing or documentation support for this effort
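For illustration, since OpenRouter exposes an OpenAI-compatible API, the existing request code would mostly need a different base URL and key; a sketch (the model slug is just an example):

```typescript
// Sketch of an OpenRouter request - OpenAI-compatible, so the request
// shape matches the extension's existing chat/completions calls.
const OPENROUTER_BASE = "https://openrouter.ai/api/v1";

async function openRouterChat(prompt: string, apiKey: string) {
  const res = await fetch(`${OPENROUTER_BASE}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "openai/gpt-3.5-turbo", // any model slug listed on openrouter.ai/models
      messages: [{ role: "user", content: prompt }],
    }),
  });
  return res.json();
}
```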

HTTP 500 Error during API auth

Describe the Bug

I tried the new extension; I added my API key, and it gives this error:

The server had an error while processing your request, please try again. (HTTP 500 Internal Server Error)
See https://platform.openai.com/docs/guides/error-codes for more details.

ChatGPT error 500: {
  "error": {
    "message": "Internal server error",
    "type": "auth_subrequest_error",
    "param": null,
    "code": "internal_error"
  }
}

Steps to reproduce:

  1. Install the extension from the marketplace
  2. Open the extension
  3. Type "hello"
  4. Enter API key into the popup

Please tell us if you have customized any of the extension settings or whether you are using the defaults.

All defaults

Additional context

No response

HTML code in user prompt will inject into UI.

Describe the Bug

Messages from ChatGPT are expected to be injected into the UI to allow for markdown formatting. However, this should not be done for user messages, both for security and because it makes HTML code in the prompt disappear, potentially breaking the UI layout.
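A minimal sketch of the fix: escape user-entered text before it reaches the webview so HTML in a prompt is displayed literally, while assistant messages can still go through the markdown renderer:

```typescript
// Sketch of escaping user-entered text before it is inserted into the
// webview, so "<div>" shows up literally instead of being parsed as HTML.
function escapeHtml(text: string): string {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Assistant messages can still be rendered as markdown/HTML; user
// messages should be escaped (or rendered as plain text) instead.
```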

Where are you running VSCode? (Optional)

MacOS

Which OpenAI model are you using? (Optional)

gpt-3.5-turbo

Additional context (Optional)

No response

Provide gpt-4 model selection in the More Actions interface

Describe the Bug

The gpt-4 model has been made generally available, so it would be helpful if it were provided directly in the More Actions interface so it can be selected alongside gpt-3.5-turbo and gpt-3.5-turbo-16k.

Where are you running VSCode? (Optional)

Linux

Which OpenAI model are you using? (Optional)

None

Additional context (Optional)

No response

MD Formatting in Prompt

Describe the feature

Allow for optional markdown formatting in the prompt and render newlines by default.

Example prompt with newlines:
Prompt
In edit mode:
Edit mode

Improve cost overview

Describe the feature

(Current UI screenshot.)
The way I would interpret this is that fewer checkmarks mean lower cost, just like fewer quality checks mean lower quality.
I feel like prices in an overview like this would usually be conveyed using dollar signs, so I propose using that convention instead of something ambiguous like checkmarks.
Maybe something like this:
mockup

Also publish on Open VSX Registry?

Describe the feature

I originally found gencay's vscode-chatgpt here: https://open-vsx.org/extension/gencay/vscode-chatgpt, and noticed that this reborn fork is currently only available on the Visual Studio Marketplace. Would you also consider publishing it on the Open VSX Registry (https://open-vsx.org/), just like the original one?

The Open VSX Registry is an open-source registry for VSCode extensions that provides an alternative to the Visual Studio Marketplace. It's gaining popularity among developers who want to avoid the limitations of the Marketplace, such as slower reviews and more restricted publication policies. (According to GPT-4 😆)

Thank you for your time maintaining this reborn fork, and for your consideration!

How do I set the api key?

It's not in the extension settings. I saw that you changed to using "secure storage", whatever that means; I can't find anything about it on Google or anywhere in the VSCode settings. Am I missing something? How do I set the damn API key???
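"Secure storage" most likely refers to VS Code's SecretStorage API, which keeps the key in the OS keychain rather than in settings.json, which is why it doesn't show up anywhere in the settings UI; the key is normally set through the extension's own input popup or command. A sketch of how an extension typically does this (the command id and secret key name below are hypothetical):

```typescript
import * as vscode from "vscode";

// Sketch of storing an API key with VS Code's SecretStorage (backed by
// the OS keychain). The command id and secret name are illustrative.
export function activate(context: vscode.ExtensionContext) {
  context.subscriptions.push(
    vscode.commands.registerCommand("chatgpt.setApiKey", async () => {
      const key = await vscode.window.showInputBox({
        prompt: "Enter your OpenAI API key",
        password: true,
      });
      if (key) {
        await context.secrets.store("chatgpt.apiKey", key);
      }
    })
  );
}

// Later, read it back with: await context.secrets.get("chatgpt.apiKey")
```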

Organization ID

Describe the Bug

Hey,
how can I set the Organization ID?

Where are you running VSCode? (Optional)

MacOS

Which OpenAI model are you using? (Optional)

gpt-4

Additional context (Optional)

No response

"Add comments" command in context menu is "Add tests"

Describe the Bug

(Screenshot attached.)

"ChatGPT: Add tests" text in context menu should be "ChatGPT: Add comments" - as this line executes "Add comments" command.

Where are you running VSCode? (Optional)

MacOS

Which OpenAI model are you using? (Optional)

None

Additional context (Optional)

No response

chatgpt.gpt3.apiBaseUrl has no effect and results in a forced exit

Describe the Bug

(Screenshot attached.)
I used chatgpt.gpt3.apiBaseUrl to point at my own proxy, but it does not seem to work properly and cannot use or verify the key. When I used the local proxy, there was no such issue.

Where are you running VSCode? (Optional)

None

Which OpenAI model are you using? (Optional)

None

Additional context (Optional)

No response
