chatgpt-md's Issues

There is a limit on response text when querying for code generation

ISSUE: The response text below is incomplete; there appears to be a limit on the response length.

role::user
I want to write 3 websocket routes with authorization based on Redis as data store and passportJS with cookie.



<hr class="__chatgpt_plugin">

role::assistant

Sure, here's an example of how you can write 3 WebSocket routes with authorization based on Redis as a data store and Passport.js with cookies:


const WebSocket = require('ws');
const passport = require('passport');
const LocalStrategy = require('passport-local').Strategy;
const redis = require("redis");
const client = redis.createClient();
const { Server } = require('@overnightjs/websocket');

// Define an object to store authorized users and their allowed topics
// Each key represents a topic, and the value is an array of authorized user IDs for that topic
const authorizedUsersByTopic = {
  'topic1': ['user1', 'user2'],
  'topic2': ['user2', 'user3'],
  'topic3': ['user1']
};

class TopicController {

  static async onConnect(socket) {
    // When a user connects, prompt them to enter their username
    socket.send(JSON.stringify({ type: 'prompt_username' }));
  }

  static async onMessage(socket, message) {
    const { type, payload } = JSON.parse(message);

    if (type === 'authenticate') {
      const { username, password } = payload;

      // Use Passport.js to authenticate the user's credentials
      passport.authenticate('local', function(err, user) {
        if (err || !user) return socket.send(JSON.stringify({ type: 'authentication_failed' }));

        // Once authenticated, add the user's information to the

<hr class="__chatgpt_plugin">

role::user

Default Template Causes Longer Lists to Become Incoherent

I used the default front matter template provided in the repo, modified for GPT-4:

temperature: 0.9
top_p: 1
max_tokens: 512
presence_penalty: 1
frequency_penalty: 1
stream: true
stop: null
n: 1
model: gpt-4

I ran into issues with the output of list-style responses: near the end of the list, the formatting and text start to become incoherent.


After a bit of testing, I discovered that frequency_penalty: 1 is not the API default, which is 0. Upon setting frequency_penalty to 0, lists became cleaner (and more closely matched the output from ChatGPT).

I was able to reproduce this same issue with the GPT 3.5 Turbo model.

Possible Fixes

  • Include a default with lower or zero penalties.
  • Include many template options with pros and cons listed.
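For the first option, a default template using the API's own defaults for both penalties might look like this (all values other than the two penalties are carried over unchanged from the template above):

```yaml
temperature: 0.9
top_p: 1
max_tokens: 512
presence_penalty: 0
frequency_penalty: 0
stream: true
stop: null
n: 1
model: gpt-4
```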

Add a stop button

Hey, loving this workflow!

It would be awesome to allow the user to cancel a currently running API request.
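A minimal sketch of how a stop command might work, assuming the plugin issues its request with `fetch` (the function names here are hypothetical): aborting an `AbortController` whose signal was passed to the request rejects the in-flight promise.

```typescript
// Hypothetical sketch: a "Stop" command for an in-flight API request.
let controller: AbortController | null = null;

async function startChat(body: unknown): Promise<unknown> {
  controller = new AbortController();
  // Passing the signal lets an abort reject this request immediately.
  return (globalThis as any).fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
    signal: controller.signal,
  });
}

// The stop command simply aborts the current controller, if any.
function stopChat(): void {
  controller?.abort();
}
```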

Streaming doesn't seem to work

I'm able to use the functions to have ChatGPT answer my initial question. But streaming, as you demonstrate in your video, isn't working for me. In the frontmatter, I have stream: true. Is there any trick to getting it to work? This is with Obsidian 1.1.16.

A separate issue is that sometimes the ChatGPT functions don't show up in the command palette. It's seemingly random.

Different color display for the response of "::assistant"

Currently, ChatGPT's responses are clearly marked off by two horizontal lines. Yet, as conversations grow longer and span lengthy pages, it would be advantageous to be able to swiftly spot ChatGPT's response sections. This could be achieved by displaying the text in a distinct color or using another visual cue to set them apart. :)

Feature Request

There is a plugin called Chat View; it might be possible to change the output and input to that format.

Allow custom endpoints

Azure now lets you host your own ChatGPT instances. It would be cool to allow this plugin to use those endpoints rather than the OpenAI one.

This would allow the plugin to be used in more sensitive environments
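A minimal sketch of making the host configurable (hypothetical helper; note that a real Azure OpenAI deployment uses its own path and auth scheme, so this only illustrates swapping the base URL):

```typescript
// Hypothetical sketch: build the chat endpoint from a configurable base URL.
function chatUrl(baseUrl: string = "https://api.openai.com"): string {
  // Trim trailing slashes so settings like "https://host/" still work.
  return `${baseUrl.replace(/\/+$/, "")}/v1/chat/completions`;
}
```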

`Create new chat from template` does not work everywhere

The Create new chat from template command does not show up when the welcome page or canvas is open. It only seems to appear inside of markdown files.

Steps to reproduce:

  • Open a new Tab
  • Welcome page appears
  • Try to invoke the command

Feature Request: Option for Maximum Dialog Context

We could introduce an option for maximum historical context, such as the previous 0-7 messages, to avoid sending all the context every time. This would allow us to continue the conversation in the same window.
Currently, the default setting will send all existing messages to the server, which is satisfactory. However, it might be preferable to include an option.
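The option described above might be sketched like this (hypothetical helper; `trimContext` keeps the newest message plus up to `maxContext` previous ones):

```typescript
interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

// Hypothetical sketch: keep the newest message plus up to `maxContext`
// previous messages; maxContext = 0 sends only the newest message.
function trimContext(messages: Message[], maxContext: number): Message[] {
  return messages.slice(-(maxContext + 1));
}
```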
Lastly, thank you for creating such a great Obsidian plugin!

This is a feature request.

Hello, Author. For countries where it is not possible to access ChatGPT's services directly, I would appreciate it if you could introduce a feature that allows us to customize the API proxy server address. This would greatly enhance accessibility and convenience for users in these countries.
Thank you!

chatgpt comments

Should be able to have a comment block in a chat conversation whose text is ignored.

comment block in form of =begin-chatgpt-md-comment and =end-chatgpt-md-comment

  • command to add new comment block
  • remove any comments before sending to chatgpt
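The comment-removal step might be sketched like this (hypothetical helper, assuming the markers proposed above):

```typescript
// Hypothetical sketch: drop comment blocks before sending text to ChatGPT.
function stripComments(text: string): string {
  // Non-greedy match so multiple comment blocks are each removed.
  return text.replace(
    /=begin-chatgpt-md-comment[\s\S]*?=end-chatgpt-md-comment/g,
    ""
  );
}
```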

gpt-4 model is not accessible

Hello!

I tried changing the model in the plugin options, but even after setting it to gpt-4 it seems that the request is made via the 3.5 model. Checking the OpenAI usage page confirms this. (I do have access to gpt-4.)

This is my frontmatter:

---
system_commands: ['I am a helpful assistant.']
temperature: 0.7
top_p: 1
max_tokens: 1500
presence_penalty: 1
frequency_penalty: 1
stream: true
stop: null
n: 1
model: gpt-4
---

Detecting Templates Recursively in a Folder

At this function

chatgpt-md/main.ts

Lines 919 to 933 in 1cd9234

getFilesInChatFolder(): TFile[] {
  const folder = this.app.vault.getAbstractFileByPath(
    this.settings.chatTemplateFolder
  ) as TFolder;
  if (folder != null) {
    return folder.children as TFile[];
  } else {
    new Notice(
      `Error getting folder: ${this.settings.chatTemplateFolder}`
    );
    throw new Error(
      `Error getting folder: ${this.settings.chatTemplateFolder}`
    );
  }
}

It could recursively call itself depending on whether each child is a folder or a file.

https://marcus.se.net/obsidian-plugin-docs/vault#is-it-a-file-or-folder

const folderOrFile = this.app.vault.getAbstractFileByPath("folderOrFile");

if (folderOrFile instanceof TFile) {
  console.log("It's a file!");
  // add to TFile[]
} else if (folderOrFile instanceof TFolder) {
  console.log("It's a folder!");
  // do recursion
}

Example from GPT

const isTemplate = (filePath: string) => {
  // check if filePath is a valid template
  ...
};

// Create a utility function for recursive search
const detectTemplatesInFolder = (folderPath: string): TFile[] => {
  const folder = this.app.vault.getAbstractFileByPath(folderPath);

  // Bail out if the path does not resolve to a folder
  if (!(folder instanceof TFolder)) {
    return [];
  }

  const templateFiles: TFile[] = [];

  for (const child of folder.children) {
    if (child instanceof TFile && isTemplate(child.path)) {
      // found a template file
      templateFiles.push(child);
    } else if (child instanceof TFolder) {
      // found a sub-folder, recurse into it
      templateFiles.push(...detectTemplatesInFolder(child.path));
    }
  }

  return templateFiles;
};

Custom endpoint

My configuration should not have any errors, right? I modified the API key in the settings panel. It seems like there is a problem with the custom endpoint, but I don't know where the issue lies.

Looking forward to your reply, thank you.


Error message `'${role}\\\\n\\\\n`;' is not one of ['system', 'assistant', 'user']` when asking ChatGPT to explain a code snippet

Hi,
Thank you for the plugin.
I came across this issue. Probably I am doing something incorrectly. I would appreciate it if you could help me. Thanks.

I can attach source file if needed.

Preconditions:

  1. Install and enable plugin as per instructions https://github.com/bramses/chatgpt-md

To reproduce

  1. Create new note.
  2. Paste snippet 1
  3. Type question "Can you explain the above code?^"
  4. Run command "Chat GPT MD: Chat"
  5. Observe response in your file , as expected.
  6. Run command "Chat GPT MD: Add divider"
  7. Paste snippet 2
  8. Run command "Chat GPT MD: Chat"
  9. Observe error message
Error: issue calling OpenAI API, see error for more details: Error: Error: {"message":"'${role}\\\\n\\\\n`;' is not one of ['system', 'assistant', 'user'] - 'messages.5.role'","type":"invalid_request_error","param":null,"code":null}
    at ChatGPT_MD.callOpenAIAPI (plugin:chatgpt-md:148:13)

Snippet 1

console.log("Hello")

Snippet 2

	addHR(editor: Editor, role: string) {
		const newLine = `\n\n<hr class="__chatgpt_plugin">\n\nrole::${role}\n\n`;
		editor.replaceRange(newLine, editor.getCursor());

		// move cursor to end of file
		const cursor = editor.getCursor();
		const newCursor = {
			line: cursor.line,
			ch: cursor.ch + newLine.length,
		};
		editor.setCursor(newCursor);
	}


Expected result: Chat GPT should explain snippet 2 correctly.

Environment

  1. MacOS Ventura 13.2.1
  2. Obsidian 1.1.16
  3. ChatGPT MD plugin: 1.0.2

Cannot continue conversation

Hi, I updated to the most recent version today.

I tried a few examples; it works perfectly at the beginning, but whenever I'd like to continue the conversation it shows an error like this:

[ChatGPT MD] Error :: This model's maximum context length is 4097 tokens. However, you requested 4621 tokens (621 in the messages, 4000 in the completion). Please reduce the length of the messages or completion.
issue calling OpenAI API, see console for more details

Am I doing something wrong, or can I only have 4097 tokens for the whole note?
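The arithmetic in the error is: 621 prompt tokens plus the frontmatter's max_tokens (here 4000) exceeds the 4097-token context window. One hypothetical mitigation is to clamp max_tokens to whatever room is left:

```typescript
// Hypothetical sketch: cap the completion size so prompt + completion
// fit inside the model's context window (4097 tokens for this model).
function clampMaxTokens(
  promptTokens: number,
  requested: number,
  contextWindow = 4097
): number {
  return Math.max(1, Math.min(requested, contextWindow - promptTokens));
}
```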

The command "chatGPT MD: chat" should collect questions with respect to the separator <hr class="__chatgpt_plugin">

Hi there,
My markdown file contains many Q&As with chatGPT. It has more than 2,000 words and 10,000 characters.
Then I created a new question as in the example below (NOTE: the plugin worked well before reaching this point):

<hr class="__chatgpt_plugin">

My new question is here .... (it's about 200 words)

Then I executed the command "chatGPT MD: chat" and encountered a token-limit error.

GUESS: the plugin is collecting all the text from the beginning of the file.
EXPECTED BEHAVIOR: it should collect the question content from the nearest separator <hr class="__chatgpt_plugin">, so that only my new question is sent to chatGPT and no token-limit error occurs.
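The expected behavior might be sketched like this (hypothetical helper: split the note on the separator and send only the final segment):

```typescript
const SEPARATOR = '<hr class="__chatgpt_plugin">';

// Hypothetical sketch: keep only the text after the last separator,
// so earlier Q&A pairs are not re-sent to the API.
function lastQuestion(note: string): string {
  const parts = note.split(SEPARATOR);
  return parts[parts.length - 1].trim();
}
```

Note this sketch trades away conversational context entirely; sending only the last few exchanges instead would be a middle ground.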

Thank you very much for the good work!
Best,

Infer title error

After using it for a while, it looks like I end up with 2 sessions with the same name:

app.js:1 Uncaught (in promise) Error: Destination file already exists!
    at e.<anonymous> (app.js:1:1102941)
    at app.js:1:235836
    at Object.next (app.js:1:235941)
    at a (app.js:1:234680)
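A hypothetical fix sketch: before renaming, probe for collisions and append a numeric suffix instead of failing.

```typescript
// Hypothetical sketch: make an inferred title unique by appending " 1", " 2", ...
// `exists` would wrap a vault lookup such as getAbstractFileByPath.
function uniqueTitle(title: string, exists: (name: string) => boolean): string {
  if (!exists(title)) return title;
  let i = 1;
  while (exists(`${title} ${i}`)) i++;
  return `${title} ${i}`;
}
```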

Difficulty Installing Plugin

I am having trouble installing the plugin on my computer. I am not knowledgeable in programming and am facing some difficulty with the installation process. I have followed the instructions provided but still cannot seem to get it to work.

Can someone please provide me with some guidance on how to install this plugin successfully? Perhaps a step-by-step guide or some troubleshooting tips would be greatly appreciated.

Thank you in advance for your assistance.


Feature Request: add a command to clear conversation (Except frontmatter)

Hello!
I was wondering if it would be possible to add a command to clear conversation (excluding frontmatter). This would allow for a quick start to a new conversation without having to create a new file or manually select and delete the previous conversation (in order to save tokens).


4096 token limit for chat note

I noticed the web version of GPT accepts 4096 tokens per message, but one session allows us to keep talking continuously with quite a long context.
This plugin, however, limits the whole context to 4096 tokens. Is this a limitation of the API itself?

Question: GPT Likeness?

Hey, apologies for the double reach-out; I did ask on Discord but didn't get a response.

Does the plugin retain information from previous conversations like the web version?

Also, am I able to make use of my ChatGPT subscription with this?

Customizable Title Templates for Inferred Titles

I would like to be able to set a template for inferred titles.
In my case I want the inferred title to be prepended with the current date (or the string of the former filename),

for example: 2022-04-10 A Conversation with an Enlightened AI.md
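The request might be sketched as a template string with placeholders (the placeholder names and helper are hypothetical):

```typescript
// Hypothetical sketch: render an inferred-title template such as
// "{{date}} {{title}}" with the current date in YYYY-MM-DD form.
function formatTitle(template: string, title: string, date: Date): string {
  const iso = date.toISOString().slice(0, 10);
  return template.replace("{{date}}", iso).replace("{{title}}", title);
}
```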

add organization option please!

We could introduce an option for setting the organization in request headers, since some users can only get access to gpt-4 through a specific organization.

Lastly, thank you for creating such a great plugin!

about the gpt-4 model

Hello,
I updated to the gpt-4 model, but when I modify the model option to "model: gpt-4" and then ask "which GPT version", it replies "gpt-3". Have I set the options correctly?

Feature request: default model

Thanks for the great work!

Can we add an option to set a default model?
I understand this can be done by inserting the front matter; however, for conversations initiated within an existing note, the model is always gpt-3.5 and cannot meet my needs.

Generate response below cursor instead of at note end

If your notes have a footer, this plugin currently streams the ChatGPT response to your prompt below the footer. I think it would be preferable if the response were placed below the cursor instead.

Otherwise amazing plugin, adding this to my vault was an absolute no-brainer.

Real Streaming

Hi!

I was wondering if the streaming functionality isn't actually real streaming but rather an animation, because the completions take quite a while to start.

If this is the case then maybe we can implement real streaming of the responses?

I found a working example with typescript here:
https://github.com/whichlight/chatgpt-api-streaming
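Real streaming reads the response body incrementally and parses OpenAI's server-sent-event lines as they arrive. A minimal sketch of the per-line parsing (the SSE framing is OpenAI's documented streaming format; the helper name is hypothetical):

```typescript
// Hypothetical sketch: extract the text delta from one SSE line of an
// OpenAI streaming response ("data: {json}" per event, "data: [DONE]" at the end).
function parseSseLine(line: string): string | null {
  if (!line.startsWith("data: ")) return null; // comments / keep-alives
  const payload = line.slice("data: ".length);
  if (payload === "[DONE]") return null;
  const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
  return typeof delta === "string" ? delta : null;
}
```

Each non-null delta could then be appended to the editor as it arrives, instead of waiting for the whole completion.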

Title inference and path

It seems that the plugin tries to rename the file and move it to the standard directory from the plugin settings.

My suggestion would be to let title inference work with any note in the vault and not change the note's path; only rename it.

ENOENT: no such file or directory, rename '/Users/dirkjan/Vaults/Dirkjan/GPT/G.md' -> '/Users/dirkjan/Vaults/Dirkjan/ChatGPT_MD/chats/Discussion about Broccoli and Favorite Color.md'

By the way, would it be possible to add "Dutch" as infer title language?

Feature Request: customize API server domain

Hi there! Thank you for creating this amazing plugin!❤️
I was wondering if it would be possible to add a feature that allows users to customize the API server domain. It would be really helpful for those who need specific domains. Thanks again for your hard work! 🙌

all steps done but..

hi, when I installed the plugin successfully, I tried to use it but an "Issue calling OpenAI API.." message is shown. Probably I had a command line open which seemed to be blocking the plugin... (I tried reinstalling without any cmd open.)

But I have a question about this error: is it a server-side problem or some plugin limit? This happened to me again after I changed the number of tokens.


I don't know how to make the plugin work again. (Should I just wait?)

Thank you.
