bramses / chatgpt-md
A (nearly) seamless integration of ChatGPT into Obsidian.
License: MIT
I break up my information into a lot of different, smaller notes, which really limits how useful something like this is. I'd really love to have a GPT chat in one note while asking questions about other notes in my vault.
Hi there,
My personal workflow relies on a file naming convention for later querying (via my own custom scripts).
Is it possible to configure the naming of chat files to a custom format like yyyy-MM-dd hh:mm?
The plugin is very nice! Thank you very much!
Bests,
TK
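A custom date-based filename could be produced with a small formatter like the sketch below (a hypothetical helper, not part of the plugin). Note that ":" is not a legal filename character on most platforms, so hh-mm stands in for hh:mm.

```typescript
// Hypothetical helper: format a chat filename like "yyyy-MM-dd hh-mm".
// ":" is not allowed in filenames on most platforms, hence "-" in the time.
function chatFileName(d: Date): string {
  const pad = (n: number): string => String(n).padStart(2, "0");
  return `${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())}` +
    ` ${pad(d.getHours())}-${pad(d.getMinutes())}`;
}
```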
ISSUE: The response text below is incomplete. The response is cut off by a length limit.
role::user
I want to write 3 websocket routes with authorization based on Redis as data store and passportJS with cookie.
<hr class="__chatgpt_plugin">
role::assistant
Sure, here's an example of how you can write 3 WebSocket routes with authorization based on Redis as data store and Passport.js with cookie:
const WebSocket = require('ws');
const passport = require('passport');
const LocalStrategy = require('passport-local').Strategy;
const redis = require("redis");
const client = redis.createClient();
const { Server } = require('@overnightjs/websocket');
// Define an object to store authorized users and their allowed topics
// Each key represents a topic, and the value is an array of authorized user IDs for that topic
const authorizedUsersByTopic = {
'topic1': ['user1', 'user2'],
'topic2': ['user2', 'user3'],
'topic3': ['user1']
};
class TopicController {
static async onConnect(socket) {
// When a user connects, prompt them to enter their username
socket.send(JSON.stringify({ type: 'prompt_username' }));
}
static async onMessage(socket, message) {
const { type, payload } = JSON.parse(message);
if (type === 'authenticate') {
const { username, password } = payload;
// Use Passport.js to authenticate the user's credentials
passport.authenticate('local', function(err, user) {
if (err || !user) return socket.send(JSON.stringify({ type: 'authentication_failed' }));
// Once authenticated, add the user's information to the
<hr class="__chatgpt_plugin">
role::user
I just saw on the OpenAI website that a trailing space in the user prompt may reduce the quality of the response. So I'm curious whether ChatGPT_MD removes trailing spaces before submission.
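If the plugin wanted to guard against this, a minimal pre-processing step (a sketch, not the plugin's confirmed behavior) could strip trailing whitespace before submission:

```typescript
// Sketch: strip trailing whitespace (spaces, tabs, newlines) from a prompt
// before it is sent to the API.
function cleanPrompt(raw: string): string {
  return raw.replace(/\s+$/, "");
}
```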
I used the default front matter template provided in the repo, modified for GPT-4:
temperature: 0.9
top_p: 1
max_tokens: 512
presence_penalty: 1
frequency_penalty: 1
stream: true
stop: null
n: 1
model: gpt-4
I ran into issues with output of list-style responses like so:
Near the end of the list, the formatting and text starts to become incoherent.
After a bit of testing, I discovered that frequency_penalty: 1 was not the default for the API, which is 0. Upon setting frequency_penalty to 0, lists became cleaner (and more closely match the output from ChatGPT).
I was able to reproduce this same issue with the GPT 3.5 Turbo model.
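For reference, a request body with frequency_penalty restored to the API default might look like this sketch (the other values mirror the frontmatter above):

```typescript
// Sketch of a chat-completion request body with frequency_penalty at the
// API default of 0; the other values mirror the frontmatter above.
const requestBody = {
  model: "gpt-4",
  temperature: 0.9,
  top_p: 1,
  max_tokens: 512,
  presence_penalty: 1,
  frequency_penalty: 0, // API default; 1 produced the incoherent lists
  stream: true,
  n: 1,
};
```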
Hey, loving this workflow!
It would be awesome to allow the user to cancel a currently running API request.
I'm able to use the functions to have ChatGPT answer my initial question. But streaming, as you demonstrate in your video, isn't working for me. In the frontmatter, I have stream: true. Is there any trick to getting it to work? This is with Obsidian 1.1.16.
A separate issue is that sometimes the ChatGPT functions don't show up in the command palette. It's seemingly random.
Currently, ChatGPT's responses are clearly marked by two horizontal lines. Yet, as conversations grow longer and span lengthy pages, it would be advantageous to be able to swiftly spot ChatGPT's response sections. This could be achieved by displaying the text in a distinct color or using another visual cue to set them apart. :)
There is a plugin called Chat View; I think the output and input could be converted to that format.
Azure now lets you host your own ChatGPT instances. It would be cool to allow this plugin to use those endpoints rather than the OpenAI one.
This would allow the plugin to be used in more sensitive environments
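A sketch of what routing to an Azure-hosted deployment could look like (URL shape per Azure OpenAI's REST API; the resource and deployment names are placeholders):

```typescript
// Sketch: build an Azure OpenAI chat-completions URL instead of the
// api.openai.com endpoint. Resource/deployment names are placeholders.
function azureChatUrl(resource: string, deployment: string, apiVersion: string): string {
  return `https://${resource}.openai.azure.com/openai/deployments/` +
    `${deployment}/chat/completions?api-version=${apiVersion}`;
}
```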
Hello, is it possible to add a feature that allows chatgpt-md to work in Canvas cards?
It is currently possible to run multiple inferences in parallel.
In streaming mode it is only possible to cancel one of them.
I am playing around, but I haven't figured out a consistent command or set of commands to turn assistant mode on. I have tried highlighted words, chat, and templates.
Chat mode works sometimes.
The Create new chat from template command does not show up when the welcome page or a canvas is open. It only seems to appear inside markdown files.
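This is consistent with how Obsidian commands can gate their own availability; the sketch below uses a checkCallback-style pattern with local stand-in types (not the plugin's actual code):

```typescript
// Minimal stand-in for an Obsidian-style command (sketch only).
type Command = {
  id: string;
  name: string;
  checkCallback: (checking: boolean) => boolean;
};

// Returning false while `checking` hides the command from the palette,
// which would explain why it only appears inside markdown files.
function makeTemplateCommand(hasMarkdownView: () => boolean, run: () => void): Command {
  return {
    id: "create-chat-from-template",
    name: "Create new chat from template",
    checkCallback: (checking: boolean): boolean => {
      if (!hasMarkdownView()) return false; // welcome page / canvas: hidden
      if (!checking) run();
      return true;
    },
  };
}
```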
Steps to reproduce:
We could introduce an option for maximum historical context, such as the previous 0-7 messages, to avoid sending all the context every time. This would allow us to continue the conversation in the same window.
Currently, the default setting will send all existing messages to the server, which is satisfactory. However, it might be preferable to include an option.
Lastly, thank you for creating such a great Obsidian plugin!
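The proposed option could be as simple as slicing the message history before the request is built; a sketch under assumed names (the Msg type and function are hypothetical):

```typescript
// Sketch: cap the conversation history at the last `maxHistory` messages.
type Msg = { role: "system" | "user" | "assistant"; content: string };

function capHistory(messages: Msg[], maxHistory: number): Msg[] {
  if (maxHistory <= 0) return []; // slice(-0) would return everything
  return messages.slice(-maxHistory);
}
```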
Hello, Author. For countries where it is not possible to access ChatGPT's services directly, I would appreciate it if you could introduce a feature that allows us to customize the API proxy server address. This would greatly enhance accessibility and convenience for users in these countries.
Thank you!
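A configurable base URL would be enough for this; a sketch (the function name is hypothetical):

```typescript
// Sketch: derive the chat-completions endpoint from a configurable base
// URL, so a proxy can stand in for api.openai.com.
function chatCompletionsUrl(baseUrl: string = "https://api.openai.com"): string {
  return `${baseUrl.replace(/\/+$/, "")}/v1/chat/completions`;
}
```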
I'm not sure what is going on, but I think it might be related to [[ being output in stream mode and/or within a code block. I guess they need to be escaped somehow. See here: https://cln.sh/zkG7Zmsj
Hope this is useful.
I am not a native speaker of English and often write my notes in languages other than English on Obsidian. Therefore, I want to add a setting for performing "infer title" in other languages.
Should be able to have a comment block in a chat conversation where the text is ignored.
The comment block would take the form of =begin-chatgpt-md-comment and =end-chatgpt-md-comment.
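A sketch of how those markers could be honored before a note is sent (regex-based; marker names taken from the request above):

```typescript
// Sketch: drop =begin-chatgpt-md-comment ... =end-chatgpt-md-comment blocks
// before the note is sent to the API.
function stripComments(note: string): string {
  return note.replace(
    /=begin-chatgpt-md-comment[\s\S]*?=end-chatgpt-md-comment/g,
    ""
  );
}
```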
Hello!
I tried changing the model in the plugin options, but even after setting it to gpt-4 it seems the request is made via the 3.5 model. Checking the OpenAI usage page confirms this. (I do have access to gpt-4.)
This is my frontmatter:
---
system_commands: ['I am a helpful assistant.']
temperature: 0.7
top_p: 1
max_tokens: 1500
presence_penalty: 1
frequency_penalty: 1
stream: true
stop: null
n: 1
model: gpt-4
---
At this function
Lines 919 to 933 in 1cd9234
https://marcus.se.net/obsidian-plugin-docs/vault#is-it-a-file-or-folder
const folderOrFile = this.app.vault.getAbstractFileByPath("folderOrFile");
if (folderOrFile instanceof TFile) {
console.log("It's a file!");
// add to TFile[]
} else if (folderOrFile instanceof TFolder) {
console.log("It's a folder!");
// do recursion
}
Example from GPT
const isTemplate = (filePath: string) => {
// check if filePath is a valid template
...
};
// Create a utility function for recursive search
const detectTemplatesInFolder = async (folderPath: string) => {
const files = await this.app.vault.getFiles();
let templateFiles = [];
for (let i=0; i<files.length; i++) {
const file = files[i];
if (!file.path.startsWith(folderPath)) { // ignore unrelated files/folders
continue;
}
if (file instanceof TFile && isTemplate(file.path)) { // found a template file
templateFiles.push(file);
} else if (file instanceof TFolder) { // found a sub-folder, do recursion
const subfolderTemplates = await detectTemplatesInFolder(file.path);
if(subfolderTemplates.length > 0 ) {
templateFiles.push(...subfolderTemplates);
}
}
}
return templateFiles;
};
Hi,
Thank you for the plugin.
I came across this issue. Probably I am doing something incorrectly. I would appreciate it if you could help me. Thanks.
I can attach source file if needed.
Preconditions:
To reproduce
Error: issue calling OpenAI API, see error for more details: Error: Error: {"message":"'${role}\\\\n\\\\n`;' is not one of ['system', 'assistant', 'user'] - 'messages.5.role'","type":"invalid_request_error","param":null,"code":null}
at ChatGPT_MD.callOpenAIAPI (plugin:chatgpt-md:148:13)
Snippet 1
console.log("Hello")
Snippet 2
addHR(editor: Editor, role: string) {
const newLine = `\n\n<hr class="__chatgpt_plugin">\n\nrole::${role}\n\n`;
editor.replaceRange(newLine, editor.getCursor());
// move cursor to end of file
const cursor = editor.getCursor();
const newCursor = {
line: cursor.line,
ch: cursor.ch + newLine.length,
};
editor.setCursor(newCursor);
}
Expected result: ChatGPT should explain snippet 2 correctly.
Environment
Hi, I updated to the most recent version today.
I tried a few examples; it works perfectly at the beginning, but whenever I'd like to continue the conversation, it shows an error like this:
[ChatGPT MD] Error :: This model's maximum context length is 4097 tokens. However, you requested 4621 tokens (621 in the messages, 4000 in the completion). Please reduce the length of the messages or completion.
issue calling OpenAI API, see console for more details
Am I doing something wrong, or can I only have 4097 tokens for the whole note?
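The 4097-token limit covers the messages and the completion together, so a large max_tokens leaves little room for the note itself. A worked sketch of the arithmetic from the error above:

```typescript
// The context window covers messages + completion together.
const contextWindow = 4097;
const maxTokens = 4000;    // completion budget requested in the settings
const messageTokens = 621; // tokens in the note's messages (from the error)

const requested = messageTokens + maxTokens;     // exceeds 4097, hence the error
const messageBudget = contextWindow - maxTokens; // tokens left for the messages
```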
Hi there,
My markdown file contains many Q&As by ChatGPT. It has more than 2,000 words and 10,000 characters.
Then I created a new question as in the example below (NOTE: the plugin worked well before reaching this point):
<hr class="__chatgpt_plugin">
My new question is here .... (it's about 200 words)
Then I executed the command "ChatGPT MD: chat" and encountered this error.
GUESS: the plugin is collecting all the text from the beginning of the file.
EXPECTED BEHAVIOR: it should collect the question content from the nearest separator <hr class="__chatgpt_plugin">
so that only my new question is sent to ChatGPT and no token-limit error occurs.
Thank you very much for the good work!
Bests,
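The expected behavior described in the report above could be sketched like this (not the plugin's actual implementation):

```typescript
// Sketch: send only the text after the last separator instead of the
// whole note.
const SEPARATOR = '<hr class="__chatgpt_plugin">';

function lastQuestion(note: string): string {
  const i = note.lastIndexOf(SEPARATOR);
  return (i === -1 ? note : note.slice(i + SEPARATOR.length)).trim();
}
```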
After using it for a while, it looks like I end up with 2 sessions with the same name:
app.js:1 Uncaught (in promise) Error: Destination file already exists!
at e.<anonymous> (app.js:1:1102941)
at app.js:1:235836
at Object.next (app.js:1:235941)
at a (app.js:1:234680)
I am having trouble installing the plugin on my computer. I am not knowledgeable in programming and am facing some difficulty with the installation process. I have followed the instructions provided but still cannot seem to get it to work.
Can someone please provide me with some guidance on how to install this plugin successfully? Perhaps a step-by-step guide or some troubleshooting tips would be greatly appreciated.
Thank you in advance for your assistance.
Hello!
I was wondering if it would be possible to add a command to clear conversation (excluding frontmatter). This would allow for a quick start to a new conversation without having to create a new file or manually select and delete the previous conversation (in order to save tokens).
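The core of such a command could be a sketch like the following: keep the YAML frontmatter block and drop everything after it (a guess at the behavior, not the plugin's code):

```typescript
// Sketch: keep the YAML frontmatter at the top of the note, clear the rest.
function clearConversation(note: string): string {
  const m = note.match(/^---\n[\s\S]*?\n---\n?/);
  return m ? m[0] : "";
}
```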
hello~
It doesn't show much at a time; you must enter content asking ChatGPT to continue. Can it continue automatically and be quickly collated (excluding the text typed to make it continue)?
Especially in Chinese, it feels like less content can be displayed at a time than in English.
I noticed the web version of GPT accepts 4096 tokens per message, but one session allows us to talk continuously with quite a long context. This plugin, however, limits the whole context to 4096 tokens. Is this a limitation of the API itself?
There are lots of plugins that do this. It would be good to know exactly how the plugin suggests notes, so that users can see whether this plugin replicates other plugins they have or complements them.
Hey, apologies for the double reach-out; I did ask on Discord but didn't get a response.
Does the plugin retain information from previous conversations like the web version?
Also, am I able to make use of my ChatGPT subscription with this?
I would like to be able to set a template for inferred titles.
In my case, I want the inferred title to be prepended with the current date (or the string of the former filename),
for example: 2022-04-10 A Conversation with an Enlightened AI.md
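A sketch of what such a template for inferred titles could look like (the {{date}} and {{title}} placeholders are hypothetical, not an existing plugin setting):

```typescript
// Sketch: expand a hypothetical title template, e.g. "{{date}} {{title}}".
function applyTitleTemplate(template: string, inferredTitle: string, date: string): string {
  return template.replace("{{date}}", date).replace("{{title}}", inferredTitle);
}
```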
We could introduce an option for specifying an organization in the request headers, since some users can only access gpt-4 through a specific organization.
Lastly, thank you for creating such a great plugin!
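The option could add the OpenAI-Organization header to each request (the header name is per OpenAI's API reference; the function itself is a sketch):

```typescript
// Sketch: include the OpenAI-Organization header when an org ID is set.
function buildHeaders(apiKey: string, orgId?: string): Record<string, string> {
  const headers: Record<string, string> = {
    "Content-Type": "application/json",
    Authorization: `Bearer ${apiKey}`,
  };
  if (orgId) headers["OpenAI-Organization"] = orgId;
  return headers;
}
```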
Hello,
I updated to the gpt-4 model, but when I set the model option to "model: gpt-4" and then asked "which gpt version", it replied "gpt-3". Did I set the options correctly?
Thanks for the great work!
Can we add an option to set default model?
I understand this can be done by inserting the front matter; however, for conversations initiated within an existing note, the model is always gpt-3.5, which does not meet my needs.
How to modify the output word count? 300 words is a bit too short.
Steps to reproduce:
Create a ChatGPT_MD/chats folder
Run Create new chat from template from the command menu
Steps to troubleshoot:
Check the ChatGPT_MD/chats folder

If your notes have a footer, this plugin currently streams the ChatGPT response to your prompt below the footer. I think it would be preferable if the response were placed below the cursor instead.
Otherwise amazing plugin, adding this to my vault was an absolute no-brainer.
Hi!
I was wondering if the streaming functionality isn't actually real streaming but rather an animation, because the completions take quite a while to start.
If this is the case then maybe we can implement real streaming of the responses?
I found a working example with typescript here:
https://github.com/whichlight/chatgpt-api-streaming
It seems that it tries to rename and move the file to the standard directory from the settings of the plugin.
My suggestion would be to let it work with any note in the vault and not change the path of the note. Only rename it.
ENOENT: no such file or directory, rename '/Users/dirkjan/Vaults/Dirkjan/GPT/G.md' -> '/Users/dirkjan/Vaults/Dirkjan/ChatGPT_MD/chats/Discussion about Broccoli and Favorite Color.md'
By the way, would it be possible to add "Dutch" as infer title language?
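The suggestion above could be sketched as renaming in place while preserving the note's folder (a hypothetical helper, not the plugin's code):

```typescript
// Sketch: build the renamed path inside the note's current folder instead
// of moving it to the configured chats folder.
function renamedPath(oldPath: string, newTitle: string): string {
  const dir = oldPath.slice(0, oldPath.lastIndexOf("/") + 1);
  return `${dir}${newTitle}.md`;
}
```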
Hi, after I installed the plugin successfully, I tried to use it, but an "Issue calling OpenAI API..." message was shown.
Probably I had a command line open, which seemed to be blocking the plugin (I tried reinstalling without any cmd open).
But I have a question about this error: is it a server-side problem or some plugin limit? This happened to me again after I changed the number of tokens.
I don't know how to make the plugin work again. (Should I just wait?)
Thank you.