
jupyter-ai's Introduction

Installation | Documentation | Contributing | License | Team | Getting help |



An extensible environment for interactive and reproducible computing, based on the Jupyter Notebook and Architecture.

JupyterLab is the next-generation user interface for Project Jupyter offering all the familiar building blocks of the classic Jupyter Notebook (notebook, terminal, text editor, file browser, rich outputs, etc.) in a flexible and powerful user interface.

JupyterLab can be extended using npm packages that use our public APIs. Prebuilt extensions can be distributed via PyPI, conda, and other package managers. Source extensions can be installed directly from npm (search for jupyterlab-extension) but require an additional build step. You can also find JupyterLab extensions by exploring the GitHub topic jupyterlab-extension. To learn more about extensions, see the user documentation.

Read the current JupyterLab documentation on ReadTheDocs.

Important

JupyterLab 3 will reach its end of maintenance date on May 15, 2024, anywhere on Earth. To help us make this transition, fixes for critical issues will still be backported until December 31, 2024. If you are still running JupyterLab 3, we strongly encourage you to upgrade to JupyterLab 4 as soon as possible. For more information, see JupyterLab 3 end of maintenance on the Jupyter Blog.


Getting started

Installation

If you use conda, mamba, or pip, you can install JupyterLab with one of the following commands.

  • If you use conda:
    conda install -c conda-forge jupyterlab
  • If you use mamba:
    mamba install -c conda-forge jupyterlab
  • If you use pip:
    pip install jupyterlab
    If installing using pip install --user, you must add the user-level bin directory to your PATH environment variable in order to launch jupyter lab. If you are using a Unix derivative (e.g., FreeBSD, GNU/Linux, macOS), you can do this by running export PATH="$HOME/.local/bin:$PATH". If you are using a macOS version that comes with Python 2, run pip3 instead of pip.

For more detailed instructions, consult the installation guide. Project installation instructions from the git sources are available in the contributor documentation.

Installing with Previous Versions of Jupyter Notebook

When using a version of Jupyter Notebook earlier than 5.3, the following command must be run after installing JupyterLab to enable the JupyterLab server extension:

jupyter serverextension enable --py jupyterlab --sys-prefix

Running

Start up JupyterLab using:

jupyter lab

JupyterLab will open automatically in the browser. See the documentation for additional details.

If you encounter an error like "Command 'jupyter' not found", please make sure the PATH environment variable is set correctly. Alternatively, you can start JupyterLab by running ~/.local/bin/jupyter lab without changing the PATH environment variable.

Prerequisites and Supported Browsers

The latest versions of the following browsers are currently known to work:

  • Firefox
  • Chrome
  • Safari

See our documentation for additional details.


Getting help

We encourage you to ask questions on the Discourse forum. A question answered there can become a useful resource for others.

Bug report

To report a bug, please read the guidelines and then open a GitHub issue. To keep resolved issues self-contained, the lock bot will lock closed issues as resolved after a period of inactivity. If a related discussion is still needed after an issue is locked, please open a new issue and reference the old issue.

Feature request

We also welcome suggestions for new features as they help make the project more useful for everyone. To request a feature please use the feature request template.


Development

Extending JupyterLab

To start developing an extension for JupyterLab, see the developer documentation and the API docs.

Contributing

To contribute code or documentation to JupyterLab itself, please read the contributor documentation.

JupyterLab follows the Jupyter Community Guides.

License

JupyterLab uses a shared copyright model that enables all contributors to maintain the copyright on their contributions. All code is licensed under the terms of the revised BSD license.

Team

JupyterLab is part of Project Jupyter and is developed by an open community. The maintenance team is assisted by a much larger group of contributors to JupyterLab and Project Jupyter as a whole.

JupyterLab's current maintainers are listed in alphabetical order, with affiliation, and main areas of contribution:

  • Mehmet Bektas, Netflix (general development, extensions).
  • Alex Bozarth, IBM (general development, extensions).
  • Eric Charles, Datalayer, (general development, extensions).
  • Frédéric Collonval, WebScIT (general development, extensions).
  • Martha Cryan, Mito (general development, extensions).
  • Afshin Darian, QuantStack (co-creator, application/high-level architecture, prolific contributions throughout the code base).
  • Vidar T. Fauske, JPMorgan Chase (general development, extensions).
  • Brian Granger, AWS (co-creator, strategy, vision, management, UI/UX design, architecture).
  • Jason Grout, Databricks (co-creator, vision, general development).
  • Michał Krassowski, Quansight (general development, extensions).
  • Max Klein, JPMorgan Chase (UI Package, build system, general development, extensions).
  • Gonzalo Peña-Castellanos, QuanSight (general development, i18n, extensions).
  • Fernando Perez, UC Berkeley (co-creator, vision).
  • Isabela Presedo-Floyd, QuanSight Labs (design/UX).
  • Steven Silvester, MongoDB (co-creator, release management, packaging, prolific contributions throughout the code base).
  • Jeremy Tuloup, QuantStack (general development, extensions).

Maintainer emeritus:

  • Chris Colbert, Project Jupyter (co-creator, application/low-level architecture, technical leadership, vision, PhosphorJS)
  • Jessica Forde, Project Jupyter (demo, documentation)
  • Tim George, Cal Poly (UI/UX design, strategy, management, user needs analysis).
  • Cameron Oelsen, Cal Poly (UI/UX design).
  • Ian Rose, Quansight/City of LA (general core development, extensions).
  • Andrew Schlaepfer, Bloomberg (general development, extensions).
  • Saul Shanabrook, Quansight (general development, extensions)

This list is provided to give the reader context on who we are and how our team functions. To be listed, please submit a pull request with your information.


Weekly Dev Meeting

We have videoconference meetings every week where we discuss what we have been working on and get feedback from one another.

Anyone is welcome to attend, if they would like to discuss a topic or just listen in.

Notes are archived on GitHub Jupyter Frontends team compass.

jupyter-ai's People

Contributors

3coins, abbott, adriens, andrii-i, apurvakhatri, aws-khatria, bjornjorgensen, cloutier, dlqqq, droumis, eduarddurech, ellisonbg, eltociear, garsonbyte, giswqs, jamesjun, jasonweill, jmkuebler, jtpio, krassowski, mahdidavari, michaelchia, minrk, mschroering, pre-commit-ci[bot], srdas, startakovsky, thorhojhus, tom-a-lynch, ya0guang


jupyter-ai's Issues

Improvements to user identity

When this extension is run by a single user without collaborative mode enabled in JupyterLab, it is a bit confusing to see an anonymous planet name for yourself. I was thinking about how we can improve this for the user and attempt to use their real name or username. It looks like os.getlogin() can return this information. This would probably be an upstream contribution to JupyterLab itself, but I think it would really help the user experience for single users.

When this is run in collaborative mode, I think it would be helpful to show users which chat participant they are, so instead of showing something like "Anonymous Callisto", show "Anonymous Callisto (me)" for your own username. I think the awareness API of JupyterLab can be used to check whether a given user is the current user.
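A minimal sketch of the single-user case described above. This is illustrative only, not jupyter-ai code; the fallback matters because os.getlogin() can raise OSError in environments without a controlling terminal (some servers and containers):

```python
import getpass
import os

# Sketch only: resolve a human-readable display name for the single-user case.
# os.getlogin() reads the controlling terminal's login name and can raise
# OSError when there is none, so fall back to getpass.getuser().
def local_username() -> str:
    try:
        return os.getlogin()
    except OSError:
        return getpass.getuser()

print(local_username())
```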

Error when asking an AI to analyze TypeScript code using a Python kernel

Description

When using a Python kernel, I cannot use ChatGPT via the %%ai magic to analyze TypeScript code. The curly braces {} are interpreted by the magic command, even when they are inside a Python triple-quoted (""") string.

Reproduce

  1. Create a new notebook.
  2. Run a cell containing %load_ext jupyter_ai.
  3. Run a cell with TypeScript in it. (See example below)
  4. See error. (See example below.)

Example input:

%%ai chatgpt
Analyze the following code, tell me what it does, and tell me something I can do to improve it.

"""
private _addToolbar(model: ICellModel): void {
  const cell = this._getCell(model);

  if (cell) {
    const toolbarWidget = new Toolbar();
    toolbarWidget.addClass(CELL_MENU_CLASS);

    const promises: Promise<void>[] = [];
    for (const { name, widget } of this._toolbar) {
      toolbarWidget.addItem(name, widget);
      if (
        widget instanceof ReactWidget &&
        (widget as ReactWidget).renderPromise !== undefined
      ) {
        (widget as ReactWidget).update();
        promises.push((widget as ReactWidget).renderPromise!);
      }
    }

    // Wait for all the buttons to be rendered before attaching the toolbar.
    Promise.all(promises)
      .then(() => {
        toolbarWidget.addClass(CELL_TOOLBAR_CLASS);
        (cell.layout as PanelLayout).insertWidget(0, toolbarWidget);

        // For rendered markdown, watch for resize events.
        cell.displayChanged.connect(this._resizeEventCallback, this);

        // Watch for changes in the cell's contents.
        cell.model.contentChanged.connect(this._changedEventCallback, this);

        // Hide the cell toolbar if it overlaps with cell contents
        this._updateCellForToolbarOverlap(cell);
      })
      .catch(e => {
        console.error('Error rendering buttons of the cell toolbar: ', e);
      });
  }
}
"""

Error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[5], line 1
----> 1 get_ipython().run_cell_magic('ai', 'chatgpt', 'Analyze the following code, tell me what it does, and tell me something I can do to improve it.\n\n"""\nprivate _addToolbar(model: ICellModel): void {\n  const cell = this._getCell(model);\n\n  if (cell) {\n    const toolbarWidget = new Toolbar();\n    toolbarWidget.addClass(CELL_MENU_CLASS);\n\n    const promises: Promise<void>[] = [];\n    for (const { name, widget } of this._toolbar) {\n      toolbarWidget.addItem(name, widget);\n      if (\n        widget instanceof ReactWidget &&\n        (widget as ReactWidget).renderPromise !== undefined\n      ) {\n        (widget as ReactWidget).update();\n        promises.push((widget as ReactWidget).renderPromise!);\n      }\n    }\n\n    // Wait for all the buttons to be rendered before attaching the toolbar.\n    Promise.all(promises)\n      .then(() => {\n        toolbarWidget.addClass(CELL_TOOLBAR_CLASS);\n        (cell.layout as PanelLayout).insertWidget(0, toolbarWidget);\n\n        // For rendered markdown, watch for resize events.\n        cell.displayChanged.connect(this._resizeEventCallback, this);\n\n        // Watch for changes in the cell\'s contents.\n        cell.model.contentChanged.connect(this._changedEventCallback, this);\n\n        // Hide the cell toolbar if it overlaps with cell contents\n        this._updateCellForToolbarOverlap(cell);\n      })\n      .catch(e => {\n        console.error(\'Error rendering buttons of the cell toolbar: \', e);\n      });\n  }\n}\n"""\n')

File /local/home/brgrange/micromamba/envs/docchat/lib/python3.11/site-packages/IPython/core/interactiveshell.py:2430, in InteractiveShell.run_cell_magic(self, magic_name, line, cell)
   2428 with self.builtin_trap:
   2429     args = (magic_arg_s, cell)
-> 2430     result = fn(*args, **kwargs)
   2432 # The code below prevents the output from being displayed
   2433 # when using magics with decodator @output_can_be_silenced
   2434 # when the last Python token in the expression is a ';'.
   2435 if getattr(fn, magic.MAGIC_OUTPUT_CAN_BE_SILENCED, False):

File /local/home/brgrange/micromamba/envs/docchat/lib/python3.11/site-packages/jupyter_ai/magics.py:201, in AiMagics.ai(self, line, cell)
    199 # interpolate user namespace into prompt
    200 ip = get_ipython()
--> 201 prompt = prompt.format_map(FormatDict(ip.user_ns))
    203 # configure and instantiate provider
    204 ProviderClass = get_provider(provider_id)

ValueError: unexpected '{' in field name

Expected behavior

TypeScript code was analyzed by the AI.
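The traceback points at prompt.format_map(...), so the root cause can be reproduced with plain str.format_map. This is an illustrative reproduction, not jupyter-ai's actual code; the brace-doubling workaround is a hypothetical user-side mitigation:

```python
# Minimal reproduction of the failure mode: str.format_map treats "{" as the
# start of a field name, and a second "{" inside that field raises ValueError,
# matching the traceback above.
code = "if (cell) { const pair = { a: 1 }; }"

try:
    code.format_map({})
    result = None
except ValueError as e:
    result = str(e)

print(result)  # unexpected '{' in field name

# Workaround until the magic escapes braces itself: double every brace
# so format_map round-trips the text unchanged.
escaped = code.replace("{", "{{").replace("}", "}}")
assert escaped.format_map({}) == code
```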

Setting to toggle ENTER, SHIFT+ENTER shortcuts in chat UI

Problem

I would like ENTER to send a message in the Jupyter AI chat UI, but it adds a new line.

Proposed Solution

Provide an option to switch the shortcut behavior.

When the option is disabled, ENTER adds a new line, and SHIFT+ENTER sends the message.

When the option is enabled, ENTER sends the message, and SHIFT+ENTER adds a new line.

Additional context

If #85 is fixed, the hints should reflect the current preference.

macOS UI conventions treat ENTER (typically on the numeric keypad) and RETURN (typically in the same row as the middle row of letters) as two separate keys. On Windows and Linux, even though the keys on the numpad and in the letter-key row send different scan codes, UIs tend to treat them identically.

Slack's "Advanced" preferences pane has options to change this behavior, and to handle code blocks bounded by ``` specially:

image

better Chat UI input

The Chat UI has significant improvements that can be made:

  • Shift + Enter should send the current message (fixed by #70)
  • The text input should have placeholder text like "Ask Jupyter AI anything..."
  • The "Send" button should be filled (i.e. blue with a white icon, instead of transparent with a blue icon)
  • The "Send" button should have helper text that appears on hover saying "Send (Shift + Enter)"

Stable diffusion support with --format image

Problem

Users cannot run image models using Stable Diffusion using Jupyter AI, because no existing output accommodates images.

Proposed Solution

Add a --format image option for the %%ai magic command.

If the AI model outputs a URL, -f image should cause the output cell to display the image by remotely loading that URL as an inline image, either using markdown ![alt text](image URL) syntax or HTML <img alt="alt text" src="image URL" /> syntax.

If the AI model outputs binary data, and -f image is specified, the output cell should display the binary data returned. If the HTTP call to the model's API specifies a MIME type in its response headers, that MIME type should be used to decode the binary data.

Update the example notebooks and documentation with information about -f image.

Additional context

The option should apply a prompt template (#42) that should tell the AI model to output an image only, without explanatory text.
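The URL-vs-binary branching described above can be sketched as follows. This is a hypothetical helper (the name render_image_output and its signature are illustrative, not part of jupyter-ai):

```python
import base64

# Hypothetical sketch of -f image handling: a URL becomes a Markdown image,
# raw bytes are inlined as a data URI using the MIME type reported by the
# model API's response headers.
def render_image_output(output, mime_type="image/png", alt="AI output"):
    if isinstance(output, str) and output.startswith(("http://", "https://")):
        return f"![{alt}]({output})"
    b64 = base64.b64encode(output).decode("ascii")
    return f'<img alt="{alt}" src="data:{mime_type};base64,{b64}" />'

print(render_image_output("https://example.com/cat.png"))
# ![AI output](https://example.com/cat.png)
```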

Scroll pinning in Chat UI

  • If user is already at the bottom of the overflow container, make sure the viewport stays anchored to the bottom upon receipt of a new message
  • If the user is in the middle of an overflow container, show a "New message" badge at the bottom of the viewport that informs users that a new message has arrived
    • This badge should, on click, scroll to the bottom of the overflow container
  • Do this in a way that supports most browsers

It's fairly difficult to implement intelligent scrolling in a correct and browser-independent manner. The first point is achievable via some overflow-anchor hacks, but these don't work on Safari.

Add "explain the last error" magic command

Problem

If a user just ran a cell and received an error that's captured in the Err array (#41), they might want to have it explained quickly.

Proposed Solution

Add a magic command such as %%explainerror that takes an AI model as a parameter, and that sends a prompt to an AI model to explain the most recent error encountered.

Err[n] is undocumented

Description

The special variable Err, introduced in a fix for #41, is not yet present in the docs.

Expected behavior

Add an example of how Err[n] is used to the docs.

In Python 3.10 on macOS M1 Pro, error when starting JupyterLab

Description

When using Python 3.10 on a Mac laptop on Apple Silicon (M1 Pro), I get the following error after regenerating my hatch env and trying to start JupyterLab:

2023-04-17 10:57:22,884	ERROR services.py:1195 -- Failed to start the dashboard: Failed to start the dashboard, return code 1
Failed to read dashboard log: [Errno 2] No such file or directory: '/tmp/ray/session_2023-04-17_10-57-21_738793_69190/logs/dashboard.log'
2023-04-17 10:57:22,884	ERROR services.py:1196 -- Failed to start the dashboard, return code 1
Failed to read dashboard log: [Errno 2] No such file or directory: '/tmp/ray/session_2023-04-17_10-57-21_738793_69190/logs/dashboard.log'
Traceback (most recent call last):
  File "/Users/jweill/Library/Application Support/hatch/env/virtual/jupyter-ai-monorepo/YuaVGges/jupyter-ai-monorepo/lib/python3.10/site-packages/ray/_private/services.py", line 1167, in start_api_server
    with open(dashboard_log, "rb") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/ray/session_2023-04-17_10-57-21_738793_69190/logs/dashboard.log'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/jweill/Library/Application Support/hatch/env/virtual/jupyter-ai-monorepo/YuaVGges/jupyter-ai-monorepo/lib/python3.10/site-packages/ray/_private/services.py", line 1178, in start_api_server
    raise Exception(err_msg + f"\nFailed to read dashboard log: {e}")
Exception: Failed to start the dashboard, return code 1
Failed to read dashboard log: [Errno 2] No such file or directory: '/tmp/ray/session_2023-04-17_10-57-21_738793_69190/logs/dashboard.log'
2023-04-17 10:57:23,226	INFO worker.py:1538 -- Started a local Ray instance.
[2023-04-17 10:57:23,411 E 69190 732428] core_worker.cc:179: Failed to register worker 01000000ffffffffffffffffffffffffffffffffffffffffffffffff to Raylet. IOError: [RayletClient] Unable to register worker with raylet. No such file or directory

Dev setup is not working

Description

While installing the jupyter-ai monorepo in a new conda environment, jlpm setup:dev fails with the error message below:

yarn run v1.21.1
$ lerna run setup:dev --stream --concurrency=1
/bin/sh: lerna: command not found
error Command failed with exit code 127.

Reproduce

  1. Create new conda environment
  2. Follow installation instructions from README.md until jlpm setup:dev fails
Diagnostics

jlpm setup:dev fails because no pre- or post-install scripts are run on the first hatch shell. A workaround installation sequence that works:

  1. clone repo
  2. use pip install hatch
  3. hatch shell (base env is created and activated)
  4. stop the new base env process with exit or Ctrl+D
  5. hatch env remove default
  6. hatch shell (dependencies are installed, and the next steps suggested in the README, jlpm setup:dev and jlpm dev, work fine)


Expected behavior

jlpm setup:dev is working when following installation instructions README.md

Context

Operating System and version: macOS 12.6.3
Browser and version: Chrome version 109.0.5414.87 (Official Build) (arm64)
JupyterLab version: head-of-master (WIP 4.0.0)

cannot call SageMaker Endpoint in magics

Description

Reproduce

I set up a SageMaker endpoint within an AWS classic notebook instance, and I am able to query it the regular way.

When I call %%ai sagemaker-endpoint:sm_llm Write some JavaScript code that prints "hello world" to the console.

It says:
ValidationError: 2 validation errors for SmEndpointProvider content_handler field required (type=value_error.missing) __root__ Could not load credentials to authenticate with AWS client. Please check that credentials in the specified profile name are valid. (type=value_error)

Here is my definition of sm_llm:
parameters = {
    "max_length": 200,
    "num_return_sequences": 1,
    "top_k": 250,
    "top_p": 0.95,
    "do_sample": False,
    "temperature": 1,
}

class ContentHandler(ContentHandlerBase):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs={}) -> bytes:
        input_str = json.dumps({"text_inputs": prompt, **model_kwargs})
        return input_str.encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json["generated_texts"][0]

content_handler = ContentHandler()

sm_llm = SagemakerEndpoint(
    endpoint_name=MODEL_CONFIG["huggingface-text2text-flan-t5-xxl"]["endpoint_name"],
    region_name="us-west-2",
    model_kwargs=parameters,
    content_handler=content_handler,
)

Expected behavior

Context

  • Operating System and version:
  • Browser and version:
  • JupyterLab version:

Extension for local generators

I'm not sure if this is a feature request or a question about whether you expect, or are hoping to see, support for other models being submitted as PRs to this repo or as plugin modules that can be added from third-party installers.

Recent days have seen powerful hints that the days of running LLMs on the desktop have arrived sooner than we might have expected (https://simonwillison.net/2023/Mar/11/llama/). Image generators are also available for local use, either bundled as an application or as a local web service. For example, AUTOMATIC1111/stable-diffusion-webui provides quite a complex UI, which suggests there might be quite a rich API available that jupyter-ai could hook into.

So as a feature request, I guess my issue is: it would be useful to have modules for stable diffusion running locally and for one of the new "portable" LLMs running locally.

(I also wonder about jupyterlite in-browser extensions, e.g. wrapping something like https://xenova.github.io/transformers.js/ ?)

As a question about third-party plugins: if, for example, a module were shipped to support stable-diffusion-webui from the AUTOMATIC1111 org, how would that work?

Possibly related to this, I note that jupyterlite, which originally baked in the pyolite kernel, now seems to be re-engineering so that all kernels are in their own repos and jupyterlite is just an "empty" container. Would that be a more scalable approach for this project?

Add magic command to create aliases for AI models

Problem

Currently, some AI models have aliases. For example, gpt2 is an alias for huggingface_hub:gpt2. These are defined in a constant in our Python code:

MODEL_ID_ALIASES = {
    "gpt2": "huggingface_hub:gpt2",
    "gpt3": "openai:text-davinci-003",
    "chatgpt": "openai-chat:gpt-3.5-turbo",
    "gpt4": "openai-chat:gpt-4",
}

To add a new alias, one would have to modify the file above and make a new release of Jupyter AI for everyone. There are no aliases for local usage.

Proposed Solution

Add a command %%ai alias <name> <value> to create custom aliases. These aliases persist for as long as the kernel remains running.

Add a command such as %%ai alias list that shows users which aliases they have defined. %%ai list (#28) may also display these aliases.

Update the example notebooks and documentation with information about this new command.
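The alias lookup proposed above could be sketched as follows. Names here (register_alias, resolve_model_id, custom_aliases) are illustrative, not jupyter-ai's actual implementation; the point is that kernel-lifetime aliases shadow the built-in table:

```python
# Built-in aliases shipped with the package (from the issue above).
MODEL_ID_ALIASES = {
    "gpt2": "huggingface_hub:gpt2",
    "chatgpt": "openai-chat:gpt-3.5-turbo",
}

# User-defined aliases, persisting only while the kernel runs.
custom_aliases = {}

def register_alias(name, target):
    """Handle `%%ai alias <name> <value>` for the running kernel."""
    custom_aliases[name] = target

def resolve_model_id(model_id):
    """Custom aliases shadow built-ins; unknown ids pass through unchanged."""
    if model_id in custom_aliases:
        return custom_aliases[model_id]
    return MODEL_ID_ALIASES.get(model_id, model_id)

register_alias("mymodel", "openai-chat:gpt-4")
print(resolve_model_id("mymodel"))  # openai-chat:gpt-4
```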

Document /generate, /ask, /learn commands

Problem

Users reading the docs don't know about the /generate (formerly /autonotebook; #90), /ask or /learn (#94) commands.

Proposed Solution

Update the user docs to explain how these commands work, with examples.

Azure Open AI Support?

Problem

We subscribe to Azure Open AI Services, which have the same models available.

Proposed Solution

Add an integration for Azure Open AI Services, either by further configuring the existing OpenAI integration or as a first-class integration.

Additional context

dev setup clobbers playground/config.py

Problem

Running jlpm setup:dev clobbers the existing playground/config.py file, deleting any configuration a developer may have specified.

Proposed Solution

Don't do that.

Backend APIs for chat configuration

Backend APIs that support

  1. Get config, which returns the current chat configuration with the selected chat model, embedding model, and any API keys.
  2. Update config, which lets the UI update any part of the configuration.
  3. Delete config, which lets the UI clear the configuration, so the user can start over.
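The three operations above can be sketched as an in-memory config store. This is a hypothetical shape (field names and functions are illustrative, not the actual jupyter-ai REST API):

```python
from dataclasses import asdict, dataclass, field
from typing import Dict, Optional

# Illustrative config record: selected models plus API keys.
@dataclass
class ChatConfig:
    model_provider_id: Optional[str] = None
    embeddings_provider_id: Optional[str] = None
    api_keys: Dict[str, str] = field(default_factory=dict)

_store = ChatConfig()

def get_config() -> dict:
    """1. Return the current chat configuration."""
    return asdict(_store)

def update_config(**changes) -> None:
    """2. Let the UI update any part of the configuration."""
    for key, value in changes.items():
        setattr(_store, key, value)

def delete_config() -> None:
    """3. Clear the configuration so the user can start over."""
    global _store
    _store = ChatConfig()

update_config(model_provider_id="openai-chat:gpt-4")
print(get_config()["model_provider_id"])  # openai-chat:gpt-4
```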

Move magic command logic to jupyter_ai_magic

Problem

Inside the monorepo, magic command logic is intermingled with other code in the jupyter_ai package.

Proposed Solution

Move the magic command logic to a new package, jupyter_ai_magic.

Add prompt templates for formats so users don't have to constrain the output

Problem

When users customize the output format using the -f/--format option to the magic command, the prompt may still cause the AI to output in a format that the magic command can't display. For example, if the user asks for HTML markup, the AI model may generate explanatory text that is not in HTML format.

Proposed Solution

For each supported output format, write a prompt template. Apply the prompt template to the specified prompt before sending it to the AI model.

In the examples below, the appended prompt template is the final sentence (or sentences) of each prompt sent to the model.

%%ai anthropic:claude-v1.2 -f html
Create a square using SVG with a black border and white fill.

… would send:

Create a square using SVG with a black border and white fill. The output should be in HTML format only, with no text before or after.

%%ai chatgpt -f math
Generate the 2D heat equation

… would send:

Generate the 2D heat equation

The output should be in LaTeX surrounded by $$. Do not include an explanation.
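The template application described above can be sketched as a simple lookup-and-format step. Names here (FORMAT_TEMPLATES, apply_format_template) are illustrative, not jupyter-ai's actual code:

```python
# Per-format templates; {prompt} is replaced by the user's prompt.
FORMAT_TEMPLATES = {
    "html": "{prompt}\nThe output should be in HTML format only, with no text before or after.",
    "math": "{prompt}\nThe output should be in LaTeX surrounded by $$. Do not include an explanation.",
}

def apply_format_template(prompt: str, fmt: str) -> str:
    """Append the format's instructions; unknown formats pass through."""
    template = FORMAT_TEMPLATES.get(fmt)
    return template.format(prompt=prompt) if template else prompt

print(apply_format_template("Generate the 2D heat equation", "math"))
```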

Document chat UI

Problem

The docs do not mention the chat UI at all.

Proposed Solution

Add instructions, with screen shots, about how to use the chat UI to the user docs.

Add Python script generation to /generate

Right now, the /generate command in the Chat UI only generates notebooks. We should use agents and tools to allow it to create other file types, like Python scripts. We could also extend it to other file types (JSON, CSV, HTML, images, etc.), but let's start with Python scripts.

Add --format code that inserts new code cell using ipylab

Problem

When the notebook author wants a generative AI model to generate code, the output is in markdown or raw (text) format, so the notebook author cannot run the generated code as they can run other code in their notebook.

Proposed Solution

Add a --format code format option. When the user runs a %%ai magic command with this option, Jupyter AI inserts a new code cell using ipylab that contains the source code produced.

Update the example notebooks and documentation to include information about --format code.

Additional context

The --format code option should apply a prompt template to ensure that only source code, with no plain-text explanations, is output. See #42.

Missing Statement of Goals and User Documentation - "Getting Started", "Next steps", etc.

Perhaps it is because this is a brand-new project, but there is not enough information to learn what this is supposed to do, and not enough for me to get started using, testing, or contributing to it.

I see several subrepositories, but those have no descriptions either, only links to themselves and the top-level repo. It looks like just a project template with some project stub repos. It is not (yet) for Python users, because almost all of the code is written in TypeScript. (https://github.com/jupyterlab/jupyter-ai/tree/main/packages)

Can someone create a statement of goals, status, and initial user documentation?

That will help people learn how to use it and possibly contribute to it.

Thank you.

Rich Lysakowski

Create Err variable so a user can pass a traceback to the LLM

Problem

Jupyter notebook authors would like to pass an error generated by their code to a large language model (LLM) so that they can understand the error better.

  1. User runs code
  2. Code throws an error
  3. User wants to ask the LLM what the error means, without copying and pasting the error into a new prompt

Proposed Solution

Create a special Err list variable, with the same indexes as the In and Out variables provided by IPython. When a user runs code in cell In[n] that raises an uncaught exception, information about that exception is captured, as a string, in Err[n], and is still displayed in the output area.
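The capture step can be sketched outside IPython. This is a simplified model only (not IPython's real execution machinery; run_cell and the plain dict are illustrative):

```python
import traceback

# Sketch: record each cell's uncaught exception as a string in an Err
# mapping indexed like In/Out.
Err = {}

def run_cell(n, source, ns):
    """Execute one 'cell'; on failure, store the traceback in Err[n]."""
    try:
        exec(source, ns)
    except Exception:
        Err[n] = traceback.format_exc()
        print(Err[n])  # the traceback is still shown in the output area

run_cell(1, "1 / 0", {})
# Err[1] now holds the ZeroDivisionError traceback as a string,
# ready to interpolate into an LLM prompt.
```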

improve README landing page

The README is often the first thing users see when discovering a new project. It's imperative that we make it as friendly as possible. Here are steps that can be taken to do so:

  • Remove "check release" badge from top of the README
    • This is redundant since GitHub shows CI badges at the top of a repo anyways
  • Rewrite the README to have the following format
    • Catchy one-liner summarizing Jupyter AI's intent and what solution it offers
    • Screenshot that covers all of the interfaces Jupyter AI offers
    • One paragraph explanation of Jupyter AI that elaborates on the preceding one-liner
  • Remove contributor docs from README and direct contributors to online docs

Improve code styling in markdown

Right now the code blocks in the chat UI are styled using react-markdown. This uses a quite different visual style than is typical for markdown code blocks in Jupyter (different background, syntax highlighting, typography, etc.). We should figure out how to use the standard styling of Jupyter markdown code blocks. We may need to build our own extension to react-markdown for this.

Add /help command to chat UI

As we add more commands (/clear, /autonotebook, /filesystem, etc.), users will need better integrated help. I propose that we create a /help command that provides this to users in a chat reply.

Current build doesn't show python code errors in terminal

Problem

The current build does not catch any Python syntax errors, missing imports, etc. during the build. Additionally, there is no Python lint support in the current setup. Due to both of these missing features, contributors have to manually observe and fix syntax errors in Python.

Solution

  • Add linting support to build
  • Add static analyzers (for example, mypy) to aid discovery of syntax errors, missing imports, etc.

Change chat message timestamps to 12-hour or 24-hour time

Problem

Relative timestamps take up substantial space and, especially when a message was sent less than a minute ago, can be distracting with their frequent updates.

Proposed Solution

Use HH:mm format, in either 12-hour or 24-hour time, for timestamps in chat messages.

Additional context

We may be able to use the browser's locale for time formatting.
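For illustration, the two proposed formats expressed with Python's strftime codes; the chat UI would presumably format timestamps on the frontend, but the shapes are the same.

```python
from datetime import datetime

# A fixed example timestamp for demonstration.
msg_time = datetime(2023, 4, 16, 14, 5)

print(msg_time.strftime("%H:%M"))     # 24-hour: 14:05
print(msg_time.strftime("%I:%M %p"))  # 12-hour (AM/PM marker is locale-dependent)
```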

jlpm dev-install fails: No matching distribution found for ray==2.2.0

Description

With a fresh Conda environment and after removing my local Hatch environment, when I attempt to run jlpm dev-install using tip-of-main, I get the following error:

@jupyter-ai/core: ERROR: Could not find a version that satisfies the requirement ray==2.2.0 (from jupyter-ai) (from versions: none)
@jupyter-ai/core: ERROR: No matching distribution found for ray==2.2.0

Attempting to run pip install ray==2.2.0 from my conda env (outside of Hatch) also fails:

(jupyterlab-0416) (hostname):jupyter-ai jweill$ pip install ray==2.2.0
ERROR: Could not find a version that satisfies the requirement ray==2.2.0 (from versions: none)
ERROR: No matching distribution found for ray==2.2.0

Context

  • Operating System and version: macOS 12.6.4 "Monterey" for Apple Silicon (M1 Pro)

Open chat UI as full panel

For quick usage, having the chat UI as a side panel makes a lot of sense. However, for some use cases, that becomes a lot of content for a side panel. I believe we should allow the user to open the chat UI as a full main-area panel. We could do this by allowing them to move it between the side panels and the main area ("or"), or by allowing them to have it in both places if they want ("and"). The second is probably easier, given that we could just show a card on the launcher.

Initial message/placeholder in chat panel

Problem

Upon initial load, the chat panel is completely blank, with no placeholder text to indicate what it is. (See jupyterlab/jupyterlab#13988 for a related issue in JupyterLab, since fixed.)


Proposed Solution

Provide either a placeholder that would appear in an otherwise totally blank chat panel, or have the chatbot provide an initial message to suggest that you can send it a message.

Additional context

See also #55, which suggests a placeholder for the chat area's message box.

Register any LangChain chain as a magic with %ai register

Problem

Users must write custom Python code in this repository to add a new LangChain chain or AI model.

Proposed Solution

Add a %ai register command that registers a LangChain chain with a new name.

Registered LangChain chains should show up when the user runs %ai list.

Modify the example notebooks and documentation to refer to this new command.
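A sketch of the registry such a command might maintain. The names here (custom_chains, register_chain, list_chains) are assumptions, not jupyter-ai's actual internals, and a real implementation would store a LangChain chain object rather than the stand-in class below.

```python
# Hypothetical registry backing `%ai register NAME` / `%ai list`.
custom_chains = {}

def register_chain(name, chain):
    if name in custom_chains:
        raise ValueError(f"A chain named {name!r} is already registered")
    custom_chains[name] = chain

def list_chains():
    # What `%ai list` would surface alongside the built-in providers.
    return sorted(custom_chains)

class EchoChain:
    """Stand-in for a LangChain chain exposing a run() method."""
    def run(self, prompt):
        return f"echo: {prompt}"

register_chain("my-chain", EchoChain())
print(list_chains())                         # ['my-chain']
print(custom_chains["my-chain"].run("hi"))   # echo: hi
```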

Provide hints about ENTER, SHIFT+ENTER during chat entry

Problem

Especially for users more familiar with other chat programs, it is not safe to assume that a user will know to press SHIFT+ENTER to send a message, since ENTER adds a new line.

Proposed Solution

When the user has typed anything into the chat UI text box, provide a hint below the text box stating that SHIFT+ENTER will send the message, and/or that ENTER adds a new line.

Additional context

When I have typed information into Slack's message box, using default settings on macOS, I see "Shift + Return to add a new line" below the text box.
