Comments (8)
Thank you for opening your first issue in this project! Engagement like this is essential for open source projects! 🤗
If you haven't done so already, check out Jupyter's Code of Conduct. Also, please try to follow the issue template, as it helps other community members to contribute more effectively.
You can meet the other Jovyans by joining our Discourse forum. There is also an intro thread there where you can stop by and say Hi! 👋
Welcome to the Jupyter community! 🎉
from jupyter-ai.
I've been staring at the code for hours but could not find a clue to this mystery: the function `generate_outline()` receives a `description` as input, but when called by `_generate_notebook()` it gets the input variable `prompt` instead, which appears to be created by `PromptTemplate()` from `langchain.prompts`. That template has `input_variables=["description"]`, but I cannot find where this may come from.
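For what it's worth, the caller's local variable name does not have to match the parameter name: `input_variables=["description"]` constrains the placeholder name inside the template string, not the name of the variable at the call site. A minimal pure-Python sketch of this binding, using a stand-in for langchain's `PromptTemplate` (the function bodies here are illustrative, not jupyter-ai's actual code):

```python
# Minimal stand-in for langchain's PromptTemplate: it stores a template
# string and the placeholder names it expects to fill in.
class PromptTemplate:
    def __init__(self, input_variables, template):
        self.input_variables = input_variables
        self.template = template

    def format(self, **kwargs):
        # The keyword names must match input_variables, regardless of
        # what the caller's local variables are called.
        missing = set(self.input_variables) - set(kwargs)
        if missing:
            raise KeyError(f"missing template variables: {missing}")
        return self.template.format(**kwargs)

# Hypothetical caller mirroring the situation described above: the local
# variable is named `prompt`, but it is bound to the `description`
# parameter, which in turn fills the {description} placeholder.
def generate_outline(description):
    template = PromptTemplate(
        input_variables=["description"],
        template="Create a notebook outline for: {description}",
    )
    return template.format(description=description)

prompt = "a lesson on Pandas groupby()"
print(generate_outline(prompt))
# → Create a notebook outline for: a lesson on Pandas groupby()
```

So seeing `prompt` passed where `description` is expected is not itself a bug; the mismatch would only matter if the template's placeholder name and the `format()` keyword disagreed.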
@labarba Are you using just `/generate`, or the slash command followed by a prompt for what you want to generate? I tried using `/generate <prompt>` and it works well, as shown below.
I have also tried simply using `/generate`, and it generates a random notebook without throwing an error. And if a backslash is used instead of the forward slash, it replies that it cannot process it. Maybe a reinstall will fix the problem? Hope this helps.
My exact input was:
/generate a lesson to teach beginners the essence of Pandas groupby(), including how to loop over grouped dataframes.
In the next input, I tried without the `/generate` command, as follows:
Give me an outline for a lesson to teach beginners the essence of Pandas groupby(), including how to loop over grouped dataframes.
… and it gave me a response as expected (indicating my connection to the LLM via the API is good).
This is my uni JupyterHub installation, so not under my direct control. The admins are not keen on rebuilding the image, so I have to give them a good reason.
@labarba It's good that your LLM API seems to be working just fine.
I tried your exact inputs (see below), and I suspect using `/generate` does a better job, if you look at the results with and without the slash command (but that's a judgment call). The left-hand panel below does not use `/generate`, and the right-hand side shows the notebook generated by the `/generate` command.
It sounds like a fresh install will solve this issue for you. Hope this helps.
@labarba I went ahead and reinstalled everything (including `jupyterhub`) in a new environment on a Linux machine, and tested whether `jupyter-ai` works as expected after connecting remotely to `jupyterhub` on that machine. I can confirm it all works well. (I am using JLab v4.1.6 and the latest version of `jupyter-ai`.)
Ha ha, meanwhile I went ahead and installed `jupyter-ai` and its dependencies on my local machine, and it worked well with the `/generate` command. So while I convince the admins of our JupyterHub to reinstall, I can do what I wanted to do on localhost!
That's good to hear, and confirms that it is easier to solve a bug than bureaucracy! I'll go ahead and close the issue.