
napari-chatgpt's Issues

Extensibility

Hi @royerloic ,

thanks again for paving this path. I played a bit with your code and found a place that could potentially benefit from flexible extensibility. I'm wondering how you would approach this and whether you'd be interested in a PR in that direction.

Use case: assume someone has the napari-assistant installed. The assistant can list functions (sorted into categories such as denoising, segmentation, etc.), and you can search their docstrings for terms such as 'nuclei'. Combining it with Omega, one could then ask Omega "Which functions do you have available for segmenting nuclei?" and Omega could list these functions. Afterwards, we could ask Omega to add one specific function as a widget to the viewer, or to execute it directly. This could lead to better code than ad-hoc assemblies of scikit-image and OpenCV snippets. There are some hundreds of these functions spread over the napari-assistant ecosystem, so I could imagine that this way of interacting with it might be useful.
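
For illustration, here is a minimal sketch of what such a searchable function registry could look like; the ToolRegistry class and its methods are hypothetical and not part of Omega or the napari-assistant:

# Hypothetical sketch of a registry Omega could query; the names
# (ToolRegistry, register, search) are illustrative only.
import inspect

class ToolRegistry:
    def __init__(self):
        self._functions = {}

    def register(self, category: str, func):
        """Register a callable under a category such as 'segmentation'."""
        self._functions.setdefault(category, []).append(func)

    def search(self, term: str):
        """Return (category, name, summary) for functions whose docstring mentions term."""
        hits = []
        for category, funcs in self._functions.items():
            for func in funcs:
                doc = inspect.getdoc(func) or ""
                if term.lower() in doc.lower():
                    hits.append((category, func.__name__, doc.splitlines()[0]))
        return hits

# A plugin such as the napari-assistant could register its functions at
# import time, and Omega could answer "which functions segment nuclei?"
# by calling registry.search("nuclei").
registry = ToolRegistry()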


The question is: how should one add a napari-assistant extension to Omega? A PR is one option, and I'm wondering whether you have thought about an alternative way of making Omega openly extensible.

Looking forward to hearing what you think.

Best,
Robert

macOS incompatible architecture

Hi there,

I tried to install your plugin via pip, which worked (no error message), but I can't run napari now. With the environment active, trying to open napari via 'napari' gives an incompatible-architecture error (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')). I have a Mac running macOS Ventura 13 with an M1 chip and have also installed the Apple OpenCL ICD wrapper: mamba install -c conda-forge ocl_icd_wrapper_apple.
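
As a quick diagnostic (not a fix), you can ask the Python interpreter in that environment which architecture it was built for; on Apple Silicon, an 'x86_64' result means the conda environment was created for Intel and runs under Rosetta:

# Print the architecture of the current Python interpreter;
# 'arm64' is expected on an M1 Mac, 'x86_64' means an Intel build.
import platform
print(platform.machine())
print(platform.platform())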

Is there anything else I need to install?

Thanks!

What's the purpose of the `Save chats as Jupyter notebooks` checkbox?

Hi Loic and other contributors,

I just realized that if I uncheck Save chats as Jupyter notebooks, napari throws an error and Omega won't open in the web browser (and the process won't terminate even if I close napari).

├ Error: 'NoneType' object has no attribute 'register_snapshot_function'
├ Omega failed to start. Please check the console for more information.
Traceback (most recent call last):
  File "/opt/anaconda3/envs/omega/lib/python3.10/site-packages/napari_chatgpt/_widget.py", line 467, in _start_omega
    self.server = start_chat_server(self.viewer,
  File "/opt/anaconda3/envs/omega/lib/python3.10/site-packages/napari_chatgpt/chat_server/chat_server.py", line 303, in start_chat_server
    notebook.register_snapshot_function(bridge.take_snapshot)
AttributeError: 'NoneType' object has no attribute 'register_snapshot_function'
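
The 'NoneType' in the traceback suggests that when the checkbox is unchecked no notebook object is created, but the server still tries to register the snapshot function on it. A minimal sketch of the kind of guard that would avoid the crash (hypothetical; the actual fix in napari-chatgpt may differ):

# Only register the snapshot hook when notebook saving is enabled;
# `notebook` is None when the checkbox is unchecked.
if notebook is not None:
    notebook.register_snapshot_function(bridge.take_snapshot)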

This makes me wonder what the intention was in making it possible to uncheck.

Problems setting up Napari ChatGPT

Hi Loic,

we are trying to set up Omega (napari + ChatGPT) on our GPU workstation and we are running into trouble.

We first open conda and activate the environment:

(base) C:\Users\IBMB-EMB-21>conda activate napari-chatgpt
(napari-chatgpt) C:\Users\IBMB-EMB-21>napari

napari opens fine.

We call the plugin, open it, and get an error asking for an API key:

|-> No API key provided. You can set your API key in code using 'openai.api_key = ', or you can set the environment variable OPENAI_API_KEY=). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = '. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details.
NoneType: None

So we created an API key at openai.com.

The problem is that we do not know how to set the new API key in the code, and we cannot proceed further. If we keep clicking the Omega plugin in napari, we keep getting the same message in the command window:

|-> No API key provided. You can set your API key in code using 'openai.api_key = ', or you can set the environment variable OPENAI_API_KEY=). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = '. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details.
NoneType: None
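
For reference, the two mechanisms the error message mentions for supplying the key look like this (a minimal sketch; the "sk-..." values are placeholders for your own key, and this sets the key for the openai library itself rather than through any napari-chatgpt dialog):

# Option 1: set the key in code before the openai module is used
# (placeholder value; use your own key from platform.openai.com).
import openai
openai.api_key = "sk-..."

# Option 2: set the OPENAI_API_KEY environment variable before
# launching napari, for example from Python:
import os
os.environ["OPENAI_API_KEY"] = "sk-..."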

No window for entering the API key is shown in napari.

What can we do? Can you help?

napari-chatgpt is missing Qt bindings

Hi,

I'm really excited to try your plugin. However, after following the instructions:

conda create -y -n napari-chatgpt -c conda-forge python=3.9
conda activate napari-chatgpt
pip install napari  
git clone https://github.com/royerlab/napari-chatgpt.git
cd napari-chatgpt
pip install -e .

The resulting napari program terminates with an error:

(napari-chatgpt) jrh630@SCI1012515 napari-chatgpt % napari
Traceback (most recent call last):
  File "/Users/jrh630/anaconda3/envs/napari-chatgpt/bin/napari", line 8, in <module>
    sys.exit(main())
  File "/Users/jrh630/anaconda3/envs/napari-chatgpt/lib/python3.9/site-packages/napari/__main__.py", line 554, in main
    _maybe_rerun_with_macos_fixes()
  File "/Users/jrh630/anaconda3/envs/napari-chatgpt/lib/python3.9/site-packages/napari/__main__.py", line 453, in _maybe_rerun_with_macos_fixes
    from qtpy import API_NAME
  File "/Users/jrh630/anaconda3/envs/napari-chatgpt/lib/python3.9/site-packages/qtpy/__init__.py", line 259, in <module>
    raise QtBindingsNotFoundError from None
qtpy.QtBindingsNotFoundError: No Qt bindings could be found
(napari-chatgpt) jrh630@SCI1012515 napari-chatgpt % git rev-parse FETCH_HEAD
FETCH_HEAD
fatal: ambiguous argument 'FETCH_HEAD': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
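
For what it's worth, a plain `pip install napari` does not pull in a Qt backend, which is what the `QtBindingsNotFoundError` above points at. A quick check from Python (a minimal diagnostic; PyQt5 is used here only as an example backend):

# Check whether a Qt binding is importable in this environment.
try:
    import PyQt5  # noqa: F401
    print("PyQt5 is available")
except ImportError:
    print("No PyQt5 found; install a backend, e.g. pip install 'napari[all]' or pip install pyqt5")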

I'm on a MacBook Pro with an M1 chip, macOS Ventura 13.5, and today's version of napari-chatgpt:

(napari-chatgpt) jrh630@SCI1012515 napari-chatgpt % git rev-parse FETCH_HEAD
734cf7d137a5e1f1ab8ff7ae586cee0ab739e43f

napari-chatgpt runs with the default non-cloned version, though without GPT-4 unfortunately; see the other issue ticket.

Thanks, Jon

installation issues in napari 0.4.18

Hi,

trying to install napari-chatgpt via the install/uninstall plugin option from the napari UI, the following error occurs:

Could not solve for environment specs
The following package could not be installed
└─ napari-chatgpt does not exist (perhaps a typo or a missing channel).

I was able to install the ChatGPT plugin in the napari 0.4.19 release.

Please assist.

Thanks,
Dean

issue with running Omega - missing packages

Hi! I followed the instructions to install into a new conda environment. I got stuck at the "solving environment" stage when I tried to install napari from conda-forge, so instead I used pip install "napari[all]". Downstream I used pip to install napari-chatgpt, and when I tried to start Omega I was told I was missing the cryptography module. So I installed that, and now I'm getting the following:
dlopen(/opt/anaconda3/envs/napari-chatgpt/lib/python3.9/site-packages/cryptography/hazmat/bindings/_openssl.abi3.so, 0x0002): Library not loaded: @rpath/libssl.1.1.dylib
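
One quick diagnostic (a sketch, not a fix) is to check which OpenSSL the interpreter's ssl module is linked against; a mismatch with the OpenSSL 1.1 that the installed cryptography build expects here often points to mixed conda/pip installs:

# Print the OpenSSL version the Python interpreter is linked against;
# the cryptography build above is looking for libssl.1.1.dylib.
import ssl
print(ssl.OPENSSL_VERSION)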

Do you think this is an issue with the strategy I used to install napari (should I try conda-forge again)? Or do you think something else might be going on?

API key docs

Hi Loic @royerloic ,

first of all, congrats on this plugin! It looks great.

I'm having some issues getting it to run. I signed up for the OpenAI API and created an API key; that key is a single long string. Omega asks me for the key and a password. I entered the name of the key and the long string as the password, which seems to be wrong. I receive the following error, and a browser opens like this:


Is there maybe a step I might have missed?

Any hint is welcome! Thanks!

Best,
Robert

(napari-chatgpt) C:\Users\rober>napari
|-> Starting Omega!
C:\Users\rober\miniconda3\envs\napari-chatgpt\lib\site-packages\napari\_qt\qt_event_loop.py:409: UserWarning: A QApplication is already running with 1 event loop. To enter *another* event loop, use `run(max_loop_level=2)`!
INFO:     Started server process [17864]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
ERROR:    [Errno 10048] error while attempting to bind on address ('0.0.0.0', 9000): only one usage of each socket address (protocol/network address/port) is normally permitted
INFO:     Waiting for application shutdown.
INFO:     Application shutdown complete.
---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
File ~\miniconda3\envs\napari-chatgpt\lib\site-packages\uvicorn\server.py:161, in Server.startup(self=<uvicorn.server.Server object>, sockets=None)
    160 try:
--> 161     server = await loop.create_server(
        loop = <_WindowsSelectorEventLoop running=False closed=True debug=False>
        config = <uvicorn.config.Config object at 0x00000204C7FBFB80>
        config.host = '0.0.0.0'
        config.port = 9000
        config.ssl = None
        config.backlog = 2048
    162         create_protocol,
    163         host=config.host,
    164         port=config.port,
    165         ssl=config.ssl,
    166         backlog=config.backlog,
    167     )
    168 except OSError as exc:

File ~\miniconda3\envs\napari-chatgpt\lib\asyncio\base_events.py:1506, in BaseEventLoop.create_server(self=<_WindowsSelectorEventLoop running=False closed=True debug=False>, protocol_factory=<function Server.startup.<locals>.create_protocol>, host='0.0.0.0', port=9000, family=<AddressFamily.AF_UNSPEC: 0>, flags=<AddressInfo.AI_PASSIVE: 1>, sock=<socket.socket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6>, backlog=2048, ssl=None, reuse_address=False, reuse_port=None, ssl_handshake_timeout=None, start_serving=True)
   1505     except OSError as err:
-> 1506         raise OSError(err.errno, 'error while attempting '
        sa = ('0.0.0.0', 9000)
   1507                       'to bind on address %r: %s'
   1508                       % (sa, err.strerror.lower())) from None
   1509 completed = True

OSError: [Errno 10048] error while attempting to bind on address ('0.0.0.0', 9000): only one usage of each socket address (protocol/network address/port) is normally permitted

During handling of the above exception, another exception occurred:

SystemExit                                Traceback (most recent call last)
File ~\miniconda3\envs\napari-chatgpt\lib\threading.py:980, in Thread._bootstrap_inner(self=<Thread(Thread-2, started 12736)>)
    977     _sys.setprofile(_profile_hook)
    979 try:
--> 980     self.run()
        self = <Thread(Thread-2, started 12736)>
    981 except:
    982     self._invoke_excepthook(self)

File ~\miniconda3\envs\napari-chatgpt\lib\threading.py:917, in Thread.run(self=<Thread(Thread-2, started 12736)>)
    915 try:
    916     if self._target:
--> 917         self._target(*self._args, **self._kwargs)
        self = <Thread(Thread-2, started 12736)>
    918 finally:
    919     # Avoid a refcycle if the thread is running a function with
    920     # an argument that has a member that points to the thread.
    921     del self._target, self._args, self._kwargs

File ~\miniconda3\envs\napari-chatgpt\lib\site-packages\napari_chatgpt\chat_server\chat_server.py:170, in start_chat_server.<locals>.server_thread_function()
    168 def server_thread_function():
    169     # Start Chat server:
--> 170     chat_server.run()
        chat_server = <napari_chatgpt.chat_server.chat_server.NapariChatServer object at 0x00000204C7FCDCA0>

File ~\miniconda3\envs\napari-chatgpt\lib\site-packages\napari_chatgpt\chat_server\chat_server.py:150, in NapariChatServer.run(self=<napari_chatgpt.chat_server.chat_server.NapariChatServer object>)
    148 def run(self):
    149     import uvicorn
--> 150     uvicorn.run(self.app, host="0.0.0.0", port=9000)
        self.app = <fastapi.applications.FastAPI object at 0x00000204C7FCDD60>
        self = <napari_chatgpt.chat_server.chat_server.NapariChatServer object at 0x00000204C7FCDCA0>

File ~\miniconda3\envs\napari-chatgpt\lib\site-packages\uvicorn\main.py:578, in run(app=<fastapi.applications.FastAPI object>, host='0.0.0.0', port=9000, uds=None, fd=None, loop='auto', http='auto', ws='auto', ws_max_size=16777216, ws_ping_interval=20.0, ws_ping_timeout=20.0, ws_per_message_deflate=True, lifespan='auto', interface='auto', reload=False, reload_dirs=None, reload_includes=None, reload_excludes=None, reload_delay=0.25, workers=None, env_file=None, log_config={'disable_existing_loggers': False, 'formatters': {'access': {'()': 'uvicorn.logging.AccessFormatter', 'fmt': '%(levelprefix)s %(client_addr)s - "%(request_line)s" %(status_code)s'}, 'default': {'()': 'uvicorn.logging.DefaultFormatter', 'fmt': '%(levelprefix)s %(message)s', 'use_colors': None}}, 'handlers': {'access': {'class': 'logging.StreamHandler', 'formatter': 'access', 'stream': 'ext://sys.stdout'}, 'default': {'class': 'logging.StreamHandler', 'formatter': 'default', 'stream': 'ext://sys.stderr'}}, 'loggers': {'uvicorn': {'handlers': ['default'], 'level': 'INFO', 'propagate': False}, 'uvicorn.access': {'handlers': ['access'], 'level': 'INFO', 'propagate': False}, 'uvicorn.error': {'level': 'INFO'}}, 'version': 1}, log_level=None, access_log=True, proxy_headers=True, server_header=True, date_header=True, forwarded_allow_ips=None, root_path='', limit_concurrency=None, backlog=2048, limit_max_requests=None, timeout_keep_alive=5, timeout_graceful_shutdown=None, ssl_keyfile=None, ssl_certfile=None, ssl_keyfile_password=None, ssl_version=<_SSLMethod.PROTOCOL_TLS_SERVER: 17>, ssl_cert_reqs=<VerifyMode.CERT_NONE: 0>, ssl_ca_certs=None, ssl_ciphers='TLSv1', headers=None, use_colors=None, app_dir=None, factory=False, h11_max_incomplete_event_size=None)
    576     Multiprocess(config, target=server.run, sockets=[sock]).run()
    577 else:
--> 578     server.run()
        server = <uvicorn.server.Server object at 0x00000204C8099D30>
    579 if config.uds and os.path.exists(config.uds):
    580     os.remove(config.uds)  # pragma: py-win32

File ~\miniconda3\envs\napari-chatgpt\lib\site-packages\uvicorn\server.py:61, in Server.run(self=<uvicorn.server.Server object>, sockets=None)
     59 def run(self, sockets: Optional[List[socket.socket]] = None) -> None:
     60     self.config.setup_event_loop()
---> 61     return asyncio.run(self.serve(sockets=sockets))
        self = <uvicorn.server.Server object at 0x00000204C8099D30>
        sockets = None

File ~\miniconda3\envs\napari-chatgpt\lib\asyncio\runners.py:44, in run(main=<coroutine object Server.serve>, debug=None)
     42     if debug is not None:
     43         loop.set_debug(debug)
---> 44     return loop.run_until_complete(main)
        loop = <_WindowsSelectorEventLoop running=False closed=True debug=False>
        main = <coroutine object Server.serve at 0x00000204C8072EC0>
     45 finally:
     46     try:

File ~\miniconda3\envs\napari-chatgpt\lib\asyncio\base_events.py:634, in BaseEventLoop.run_until_complete(self=<_WindowsSelectorEventLoop running=False closed=True debug=False>, future=<Task finished name='Task-1' coro=<Server.serve(...es\uvicorn\server.py:63> exception=SystemExit(1)>)
    632 future.add_done_callback(_run_until_complete_cb)
    633 try:
--> 634     self.run_forever()
        self = <_WindowsSelectorEventLoop running=False closed=True debug=False>
    635 except:
    636     if new_task and future.done() and not future.cancelled():
    637         # The coroutine raised a BaseException. Consume the exception
    638         # to not log a warning, the caller doesn't have access to the
    639         # local task.

File ~\miniconda3\envs\napari-chatgpt\lib\asyncio\base_events.py:601, in BaseEventLoop.run_forever(self=<_WindowsSelectorEventLoop running=False closed=True debug=False>)
    599 events._set_running_loop(self)
    600 while True:
--> 601     self._run_once()
        self = <_WindowsSelectorEventLoop running=False closed=True debug=False>
    602     if self._stopping:
    603         break

File ~\miniconda3\envs\napari-chatgpt\lib\asyncio\base_events.py:1905, in BaseEventLoop._run_once(self=<_WindowsSelectorEventLoop running=False closed=True debug=False>)
   1903             self._current_handle = None
   1904     else:
-> 1905         handle._run()
        handle = <Handle <TaskWakeupMethWrapper object at 0x00000204CA065610>(<Future finished result=True>)>
   1906 handle = None

File ~\miniconda3\envs\napari-chatgpt\lib\asyncio\events.py:80, in Handle._run(self=<Handle <TaskWakeupMethWrapper object at 0x00000204CA065610>(<Future finished result=True>)>)
     78 def _run(self):
     79     try:
---> 80         self._context.run(self._callback, *self._args)
        self = <Handle <TaskWakeupMethWrapper object at 0x00000204CA065610>(<Future finished result=True>)>
        self._callback = <TaskWakeupMethWrapper object at 0x00000204CA065610>
        self._context = <Context object at 0x00000204C80A1C40>
        self._args = (<Future finished result=True>,)
     81     except (SystemExit, KeyboardInterrupt):
     82         raise

File ~\miniconda3\envs\napari-chatgpt\lib\site-packages\uvicorn\server.py:78, in Server.serve(self=<uvicorn.server.Server object>, sockets=None)
     75 color_message = "Started server process [" + click.style("%d", fg="cyan") + "]"
     76 logger.info(message, process_id, extra={"color_message": color_message})
---> 78 await self.startup(sockets=sockets)
        self = <uvicorn.server.Server object at 0x00000204C8099D30>
        sockets = None
     79 if self.should_exit:
     80     return

File ~\miniconda3\envs\napari-chatgpt\lib\site-packages\uvicorn\server.py:171, in Server.startup(self=<uvicorn.server.Server object>, sockets=None)
    169     logger.error(exc)
    170     await self.lifespan.shutdown()
--> 171     sys.exit(1)
    173 assert server.sockets is not None
    174 listeners = server.sockets

SystemExit: 1
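
Note that the traceback above is a port conflict rather than an authentication failure: something is already listening on port 9000, most likely a previous Omega server that never shut down. A quick check from Python (a minimal sketch):

# Check whether anything is already bound to port 9000,
# the port Omega's chat server uses in the traceback above.
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    in_use = s.connect_ex(("127.0.0.1", 9000)) == 0
print("port 9000 in use:", in_use)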

Can I reset my OpenAI password?

Hi, I cannot remember the password that I use to start Omega (the one entered alongside the OpenAI key). Can I reset this password or delete the .json file that stores it? I even tried installing Omega in a new env, but it keeps asking for the password rather than setting up a new OpenAI key. Any help would be appreciated! Thanks.
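
For background, an API key stored locally under a user-chosen password is typically encrypted with a key derived from that password, so the password itself cannot simply be recovered; the usual recourse is to delete the stored file and enter the API key again. A minimal sketch of that general pattern using the cryptography package (illustrative only, not necessarily napari-chatgpt's exact storage code):

# General pattern for storing a secret encrypted with a password-derived
# key (illustrative only; napari-chatgpt's actual storage may differ).
import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(password: str, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                     iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(password.encode()))

salt = os.urandom(16)
token = Fernet(derive_key("chosen password", salt)).encrypt(b"sk-...")
# Without the original password the token cannot be decrypted.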

authentication failed for Omega

My question: I had already logged in to Omega and run segmentation successfully. But after that I uninstalled the packages and reinstalled Omega, and this time I deleted the original key and set a new key (screenshot). It goes straight to asking me to enter a password and does not let me enter the OpenAI key (screenshot). I entered the OpenAI key as shown (screenshot), but it also failed. Why?

Does Omega support an Azure OpenAI GPT-4 API key?

I attempted to install Omega using an Azure OpenAI GPT-4 API key, and Omega did not recognize the key. Does Omega work with Azure OpenAI keys? When I deleted the key and then uninstalled and reinstalled napari and napari-chatgpt directly from my terminal (running macOS on an M1 chip) to see if I could use one of my personal OpenAI API keys, I no longer got the prompt to enter an API key when launching Omega. Any suggestions for troubleshooting would be greatly appreciated. I'm really interested in trying out napari with Omega on some of my IF imaging data. Thank you in advance!
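
For context, Azure OpenAI endpoints need more configuration than the key alone in the openai Python library (0.x), which may be one reason a bare Azure key is not recognized; whether Omega exposes these settings is a separate question. A sketch of the library-level settings (placeholders in angle brackets are yours to fill in):

# Library-level settings required for Azure OpenAI in openai 0.x;
# these are not napari-chatgpt options, just the underlying library's.
import openai

openai.api_type = "azure"
openai.api_base = "https://<your-resource-name>.openai.azure.com/"  # placeholder
openai.api_version = "2023-05-15"  # example API version
openai.api_key = "<azure-key>"     # placeholder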
