Comments (6)
Hi @tophee,
Yes, this is because the plugin writes directly to Obsidian's editor when stream is on (see https://github.com/bramses/chatgpt-md/blob/master/main.ts#L164-L170), as if you were manually "typing" the characters really fast.
In fact, I had to artificially slow down the streaming of backtick characters because Obsidian's editor processes them slightly slower than other characters (idk why). So what you're seeing is just the name of the game, unfortunately.
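The throttling idea can be sketched roughly like this (a minimal sketch with hypothetical names and delay values, not the plugin's actual code; see the main.ts link above for the real thing):

```typescript
// Sketch of per-character throttling. A plain callback stands in for
// Obsidian's editor; the 20 ms delay is an illustrative value.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function writeThrottled(append: (ch: string) => void, text: string): Promise<void> {
  for (const ch of text) {
    if (ch === "`") {
      await sleep(20); // extra pause: the editor handles backticks more slowly
    }
    append(ch);
  }
}
```

In the plugin, `append` would be a call into Obsidian's editor API rather than a string buffer; the throttling only changes timing, never the text that ends up in the note.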
If it bothers you, you can set `stream` to `false` and it should load in as one large block. Does that make sense?
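For reference, the frontmatter toggle would look something like this (the `stream` key is taken from the thread; treat the rest of the layout as illustrative):

```yaml
---
# ChatGPT MD note frontmatter: stream: false makes the reply
# arrive as one block instead of being typed in character by character
stream: false
---
```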
from chatgpt-md.
You mean it’s a feature, not a bug?
In any case, the first thing I tried was to turn stream off, but that didn't stop it from streaming. Then I saw that it was still set to true in Default Chat Frontmatter, so I changed it there too, but it still keeps streaming…
@tophee you have to set it to false; the default in the system is `stream: true`.
EDIT: just checked, there seems to be a bug somewhere in the logic. I'll mark it for the next release.
> You mean it’s a feature, not a bug?
haha, yeah, I suppose
fixed with https://github.com/bramses/chatgpt-md/releases/tag/1.1.1
Thanks for fixing this! But may I ask why you consider the behaviour with square brackets a feature?
As I think about it, maybe I misunderstood you and it's not so much a feature as the natural behaviour when streaming text into Obsidian? In that case: are you planning to make it possible to turn this off? I'm not sure what kind of effort is required for this in terms of code, but I'd imagine it to be possible to somehow temporarily deactivate things like auto-completion (or things like that) during streaming, no?
BTW: do you know whether what we see in streaming mode is the response as it is being produced, more or less in real time (which would mean that streaming provides faster responses), or whether the stream is fake in the sense that the entire answer is already produced before the streaming starts (which would mean that turning streaming mode off would not slow down responses)?
> In that case: are you planning to make it possible to turn this off? I'm not sure what kind of effort is required for this in terms of code, but I'd imagine it to be possible to somehow temporarily deactivate things like auto-completion (or things like that) during streaming, no?
I meant feature as in it's built into Obsidian. ChatGPT MD has no control over how Obsidian writes to its own editor (https://github.com/obsidianmd/obsidian-api/blob/master/obsidian.d.ts#L902). It merely takes data from the OpenAI response and appends it to the editor. Anything lower level would probably break CodeMirror or cause some other unforeseen issue.
Edit: That being said, if you do find a solution, I'd be happy to accept it, please feel free to PR!
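The append-only write path described above can be sketched like this (a simplified model: a string buffer stands in for Obsidian's editor, and the chunk list stands in for the OpenAI response):

```typescript
// Simplified model of the plugin's write path: each chunk from the API
// response is appended to the document as-is, with no lower-level editor
// hooks. A string buffer plays the role of Obsidian's editor here.
interface EditorLike {
  content: string;
}

function appendChunk(editor: EditorLike, chunk: string): void {
  editor.content += chunk; // the plugin effectively "types" this text into the note
}

const doc: EditorLike = { content: "" };
for (const chunk of ["Hello", " from", " the", " stream"]) {
  appendChunk(doc, chunk);
}
// doc.content is now "Hello from the stream"
```

Because the plugin only appends text, anything Obsidian itself does while the text arrives (auto-pairing brackets, code-fence handling, and so on) happens outside the plugin's control.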
> BTW: do you know whether what we see in streaming mode is the response as it is being produced, more or less in real time (which would mean that streaming provides faster responses), or whether the stream is fake in the sense that the entire answer is already produced before the streaming starts (which would mean that turning streaming mode off would not slow down responses)?
As of now, the stream is fake, yes. I'm looking into an Event Source patch but that may or may not work, idk yet
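For context, a real (non-fake) stream from the Chat Completions API arrives as server-sent events: one `data: {...}` JSON line per event, terminated by `data: [DONE]`. A minimal sketch of pulling the text deltas out of such a body (the format follows OpenAI's streaming responses; the function name is my own):

```typescript
// Extracts the incremental text pieces from an OpenAI-style SSE body.
// Assumes the Chat Completions streaming format: one `data: {...}` JSON
// object per event, terminated by `data: [DONE]`.
function extractDeltas(sseBody: string): string[] {
  const deltas: string[] = [];
  for (const line of sseBody.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    const payload = line.slice("data: ".length).trim();
    if (payload === "[DONE]") break;
    const event = JSON.parse(payload);
    const text = event.choices?.[0]?.delta?.content;
    if (typeof text === "string") deltas.push(text);
  }
  return deltas;
}
```

Joining the deltas reproduces the full reply, so a real stream shows the answer as it is generated, whereas a fake stream replays an answer that has already fully arrived.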
Related Issues (20)
- Support embedded like smart connections
- More context, rolling context, ways to save tokens
- Feature Requests for Obsidian Plugin(save lots of tokens)
- base url replacement support request
- Bug: only the `chat` command seems to be working
- Define frontmatter for simple "Chat"-Command
- Endpoint on Azure
- No chatGPT related commanders appered in the dropdown menu.
- Use Markdown syntax whenever possible
- Allow use of gpt-4-32k model
- It's impossible to tell what parameters are being used (default or frontmatter or something else)
- Using gpt-4 as default not possible
- Is it possible to submit only the selected content to GPT, rather than the entire document?
- Sorting of templates
- Better explain how to use
- Sanitize file title before setting it as the filename
- [Feature request] Add supporting other models and APIs
- Plugin limits max tokens
- OpenRouter AI support
- Critical Failure - API calls no longer working in Obsidian