
oterm's Introduction

oterm

the text-based terminal client for Ollama.

Features

  • intuitive and simple terminal UI; no need to run servers or frontends, just type oterm in your terminal.
  • multiple persistent chat sessions, stored together with the context embeddings and system prompt customizations in SQLite.
  • can use any of the models you have pulled in Ollama, or your own custom models.
  • allows easy customization of the model's system prompt and parameters.

Installation

Using brew for macOS:

brew tap ggozad/formulas
brew install ggozad/formulas/oterm

Using pip:

pip install oterm

Usage

In order to use oterm you will need to have the Ollama server running. By default it expects to find the Ollama API running on http://0.0.0.0:11434. If you are running Ollama inside Docker or on a different host/port, use the OLLAMA_HOST environment variable to customize the host/port. Alternatively, you can use OLLAMA_URL to specify the full http(s) URL. Setting OTERM_VERIFY_SSL to False will disable SSL verification.

OLLAMA_URL=http://host:port/api
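For concreteness, a minimal shell sketch of the three variables described above (the addresses are placeholders; substitute your own host and port):

```shell
# Point oterm at a remote Ollama instance (placeholder address).
export OLLAMA_HOST=192.168.0.42:11434

# Alternatively, specify the full http(s) URL including the /api path.
export OLLAMA_URL=http://192.168.0.42:11434/api

# Disable SSL verification, e.g. when using a self-signed certificate.
export OTERM_VERIFY_SSL=False
```

With these exported, launching oterm from the same shell will use the remote server.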

The following keyboard shortcuts are supported:

  • Ctrl+N - create a new chat session
  • Ctrl+E - edit the chat session (change template, system prompt or format)
  • Ctrl+R - rename the current chat session
  • Ctrl+S - export the current chat session as markdown
  • Ctrl+X - delete the current chat session
  • Ctrl+T - toggle between dark/light theme
  • Ctrl+Q - quit
  • Ctrl+L - switch to multiline input mode
  • Ctrl+P - select an image to include with the next message
  • ↑/↓ - navigate through the history of previous prompts
  • Ctrl+Tab - open the next chat
  • Ctrl+Shift+Tab - open the previous chat

In multiline mode, you can press Enter to send the message, or Shift+Enter to add a new line at the cursor.

While Ollama is inferring the next message, you can press Esc to cancel the inference.

Note that some of the shortcuts may not work in certain contexts, for example pressing ↑/↓ while the prompt is in multi-line mode.

Copy / Paste

It is difficult to properly support copy/paste in terminal applications. You can copy blocks to your clipboard as follows:

  • clicking a message will copy it to the clipboard.
  • clicking a code block will only copy the code block to the clipboard.

For most terminals there is a key modifier you can hold to click and drag to manually select text. For example:

  • iTerm: the Option key.
  • Gnome Terminal: the Shift key.
  • Windows Terminal: the Shift key.

Customizing models

When creating a new chat, you may not only select the model, but also customize the system instruction as well as the parameters (such as context length, seed, temperature, etc.) passed to the model. For a list of all supported parameters, refer to the Ollama documentation. Checking the JSON output checkbox will force the model to reply in JSON format. Please note that oterm will not (yet) pull models for you; use ollama to do that. All the models you have pulled or created will be available to oterm.
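Since oterm delegates model management to ollama, the same kind of customization (system prompt, parameters) can also be baked into a custom model via an Ollama Modelfile; a minimal sketch, where the model name and values are purely illustrative:

```
# Modelfile — build with: ollama create mymodel -f Modelfile
FROM llama2
SYSTEM "You are a concise assistant."
PARAMETER temperature 0.7
PARAMETER num_ctx 4096
```

The resulting model then appears in oterm's model selection alongside the models you have pulled.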

You can also "edit" the chat to change the system prompt, parameters or format. Note that the model cannot be changed once the chat has started. In addition, whatever "context" the chat had (an embedding of the previous messages) will be kept.

Chat session storage

All your chat sessions are stored locally in an SQLite database. You can customize the directory where the database is stored by setting the OTERM_DATA_DIR environment variable.

You can find the location of the database by running oterm --db.
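As a quick sketch, relocating the database to a custom directory (the path is just an example):

```shell
# Store oterm's SQLite database under a custom directory (example path).
export OTERM_DATA_DIR="$HOME/.local/state/oterm"
mkdir -p "$OTERM_DATA_DIR"
```

Running `oterm --db` in the same shell afterwards should report a path inside that directory.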

Screenshots

Chat · Model selection · Image selection

License

This project is licensed under the MIT License.

oterm's People

Contributors

bbatsell, denehoffman, dependabot[bot], ggozad, habaneraa, huynle, lainedfles, perongh, suhr, yilmaz08


oterm's Issues

Module 'httpcore' has no attribute 'Request'

I installed oterm via pip and got this error.

Traceback (most recent call last):
  File "/usr/local/bin/oterm", line 5, in <module>
    from oterm.cli.oterm import cli
  File "/usr/local/lib/python3.10/dist-packages/oterm/cli/oterm.py", line 6, in <module>
    from oterm.app.oterm import app
  File "/usr/local/lib/python3.10/dist-packages/oterm/app/oterm.py", line 7, in <module>
    from oterm.app.chat_edit import ChatEdit
  File "/usr/local/lib/python3.10/dist-packages/oterm/app/chat_edit.py", line 14, in <module>
    from oterm.ollama import OllamaAPI
  File "/usr/local/lib/python3.10/dist-packages/oterm/ollama.py", line 4, in <module>
    import httpx
  File "/usr/local/lib/python3.10/dist-packages/httpx/__init__.py", line 48, in <module>
    from ._main import main
  File "/usr/local/lib/python3.10/dist-packages/httpx/_main.py", line 111, in <module>
    def format_request_headers(request: httpcore.Request, http2: bool = False) -> str:
AttributeError: module 'httpcore' has no attribute 'Request'

Suggestion: export option

Hello,

It would be nice to be able to export the current chat in, for example, markdown format.

P.S.: Very nice app, thank you! :)

TypeError: 'NoneType' object is not iterable

Hi. I have ollama installed and running via brew services start ollama. On each launch of oterm, I see the following:

❯ oterm
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /opt/homebrew/Cellar/oterm/0.1.1/libexec/lib/python3.11/site-packages/oterm/app/model_selection. │
│ py:62 in on_mount                                                                                │
│                                                                                                  │
│   59 │                                                                                           │
│   60 │   async def on_mount(self) -> None:                                                       │
│   61 │   │   self.models = await self.api.get_models()                                           │
│ ❱ 62 │   │   models = [model["name"] for model in self.models]                                   │
│   63 │   │   option_list = self.query_one("#model-select", OptionList)                           │
│   64 │   │   option_list.clear_options()                                                         │
│   65 │   │   for model in models:                                                                │
│                                                                                                  │
│ ╭──────── locals ─────────╮                                                                      │
│ │ self = ModelSelection() │                                                                      │
│ ╰─────────────────────────╯                                                                      │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: 'NoneType' object is not iterable

Copying text output

Hello, I am trying to copy and paste from oterm and I seem to be unable to make any selection at all in the first place.
Is there something I am missing, or is this not a current feature?

How can I trigger shortcuts?


How can I trigger these shortcuts? I am using a Mac, and when I type Shift + N it is just treated as chat input. Ctrl + Shift + N and Cmd + Shift + N don't work either.

windows: strange crash when I select a model.

After restarting oterm, everything works correctly and the chat with the selected LLM is ready to use.


`╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ C:\Users\noki\AppData\Local\Programs\Python\Python311\Lib\site-packages\oterm\app\oterm.py:89 in on_tab_activated │
│ │
│ 86 │ @on(TabbedContent.TabActivated) │
│ 87 │ async def on_tab_activated(self, event: TabbedContent.TabActivated) -> None: │
│ 88 │ │ container = event.pane.query_one(ChatContainer) │
│ ❱ 89 │ │ await container.load_messages() │
│ 90 │ │
│ 91 │ def compose(self) -> ComposeResult: │
│ 92 │ │ yield Header() │
│ │
│ ╭───────────────────────── locals ─────────────────────────╮ │
│ │ container = ChatContainer() │ │
│ │ event = TabActivated( │ │
│ │ │ TabbedContent(id='tabs'), │ │
│ │ │ ContentTab(id='--content-tab-chat-53'), │ │
│ │ │ TabPane(id='chat-53') │ │
│ │ ) │ │
│ │ self = OTerm(title='oTerm', classes={'-dark-mode'}) │ │
│ ╰──────────────────────────────────────────────────────────╯ │
│ │
│ C:\Users\noki\AppData\Local\Programs\Python\Python311\Lib\site-packages\oterm\app\widgets\chat.py:87 in │
│ load_messages │
│ │
│ 84 │ async def load_messages(self) -> None: │
│ 85 │ │ if self.loaded: │
│ 86 │ │ │ return │
│ ❱ 87 │ │ message_container = self.query_one("#messageContainer") │
│ 88 │ │ for author, message in self.messages: │
│ 89 │ │ │ chat_item = ChatItem() │
│ 90 │ │ │ chat_item.text = message │
│ │
│ ╭──────── locals ────────╮ │
│ │ self = ChatContainer() │ │
│ ╰────────────────────────╯ │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
NoMatches: No nodes match on ChatContainer()

NOTE: 1 of 2 errors shown. Run with textual run --dev to see all errors.`

Asian language response instead of English

I queried “What was the color of George Washington's white horse?”, and after a few minutes got a response in an unknown language:

(screenshot)

Running the same query with samantha-mistral did not have this problem:

(screenshot)

Support the original OLLAMA_HOST env variable please!

When using ollama itself it uses OLLAMA_HOST as the env variable for the ollama server to use. This can be either just an IP or a port or ip+port (e.g. OLLAMA_HOST=192.168.1.10, OLLAMA_HOST=192.168.1.11:11434).

It would be nice if oterm would just pick up this env variable, which is most likely already set by ollama users who run servers on other systems or ports — basically making it behave the same as the ollama client command.
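To illustrate the request, these are the forms the issue says ollama itself accepts for OLLAMA_HOST (the addresses are examples):

```shell
# IP only — ollama assumes its default port (11434).
export OLLAMA_HOST=192.168.1.10

# IP plus explicit port.
export OLLAMA_HOST=192.168.1.11:11434
```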

AUR/Nix package

It'd be nice, since ollama is present on the AUR, to make an oterm AUR package. From what I understand, it shouldn't be too hard. In the future, maybe a Nix package would be nice, too?

Crash when trying to rename a chat

Stacktrace:

│   170 │   │   │   chat = await chat_queries.get_chat(connection, id=id)  # type: ignore                                                                                                                                                                                                          │
│   171 │   │   │   if chat:                                                                                                                                                                                                                                                                       │
│   172 │   │   │   │   chat = chat[0]                                                                                                                                                                                                                                                             │
│ ❱ 173 │   │   │   │   id, name, model, context, template, system = chat                                                                                                                                                                                                                          │
│   174 │   │   │   │   context = json.loads(context)                                                                                                                                                                                                                                              │
│   175 │   │   │   │   return id, name, model, context, template, system                                                                                                                                                                                                                          │
│   176                                                                                                                                                                                                                                                                                            │
│                                                                                                                                                                                                                                                                                                  │
│ ╭──────────────────────────────────────────────────────────────────────────────────────────────────── locals ─────────────────────────────────────────────────────────────────────────────────────────────────────╮                                                                              │
│ │       chat = (5, 'chat #3 - less_ctx_dsc_6_7b_instr_q8:latest', 'less_ctx_dsc_6_7b_instr_q8:latest', '[2042, 417, 274, 20926, 14244, 20391, 11, 26696, 254, 20676, 30742, 339, 8589, 2'+8009, None, None, None) │                                                                              │
│ │ connection = <Connection(Thread-9, started 6206042112)>                                                                                                                                                         │                                                                              │
│ │         id = 5                                                                                                                                                                                                  │                                                                              │
│ │       self = <oterm.store.store.Store object at 0x1039f04d0>                                                                                                                                                    │                                                                              │
│ ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯                                                                              │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: too many values to unpack (expected 6)

Retry, Continue, Reword, Pop, Edit, Fake and Dictate last answer.

It would be nice to be able to act on the last answer or force the AI into some schema:

Retry - Send the same prompt again and replace the answer (in one step).
Continue - Send the last prompt plus the current answer to let the LLM continue the answer.
Reword - Delete the last answer and the last prompt, then put the last prompt into the entry field so it can be edited.
Pop - (alternative) Delete the last answer/prompt (having /pop 3 to remove the last x turns would be even nicer).
Edit - Copy the answer of the LLM into the input and let us modify it, to manipulate the AI.
Fake - Let us write an answer and make the chat history look like the model answered this.
Dictate - Let us write a prompt and then, after some special character, start the answer of the model so it will continue with those words.

These are similar to what textgen-gui and memgpt offer (where I implemented some of that for memgpt)

oterm not running: FileNotFoundError (config.json)

Hi,

I installed via 'pip install oterm' on both Ubuntu and Debian in WSL. On running 'oterm' I get:

(env) xyz@abc:~$ oterm
Traceback (most recent call last):
  File "/home/xyz/env/lib/python3.11/site-packages/oterm/config.py", line 79, in __init__
    with open(self._path, "r") as f:
         ^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/home/xyz/.local/share/oterm/config.json'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/xyz/env/bin/oterm", line 5, in <module>
    from oterm.cli.oterm import cli
  File "/home/xyz/env/lib/python3.11/site-packages/oterm/cli/oterm.py", line 6, in <module>
    from oterm.app.oterm import app
  File "/home/xyz/env/lib/python3.11/site-packages/oterm/app/oterm.py", line 6, in <module>
    from oterm.app.model_selection import ModelSelection
  File "/home/xyz/env/lib/python3.11/site-packages/oterm/app/model_selection.py", line 13, in <module>
    from oterm.ollama import OllamaAPI
  File "/home/xyz/env/lib/python3.11/site-packages/oterm/ollama.py", line 6, in <module>
    from oterm.config import envConfig
  File "/home/xyz/env/lib/python3.11/site-packages/oterm/config.py", line 99, in <module>
    appConfig = AppConfig()
                ^^^^^^^^^^^
  File "/home/xyz/env/lib/python3.11/site-packages/oterm/config.py", line 83, in __init__
    self.save()
  File "/home/xyz/env/lib/python3.11/site-packages/oterm/config.py", line 93, in save
    with open(self._path, "w") as f:
         ^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/home/xyz/.local/share/oterm/config.json'

crash on Mac M2 first run

I installed via brew which worked, then typed oterm

% oterm                   
╭────────────────────────────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ /opt/homebrew/Cellar/oterm/0.1.4/libexec/lib/python3.11/site-packages/oterm/app/model_selection.py:102 in on_mount                                                                                                                            │
│                                                                                                                                                                                                                                               │
│    99 │   │   self.models = await self.api.get_models()                                        ╭──────────────────────────────────── locals ─────────────────────────────────────╮                                                            │
│   100 │   │   models = [model["name"] for model in self.models]                                │  model = 'codellama:7b-instruct'                                                │                                                            │
│   101 │   │   for model in models:                                                             │ models = ['codellama:7b-instruct', 'llama2:latest', 'llama2-uncensored:latest'] │                                                            │
│ ❱ 102 │   │   │   info = await self.api.get_model_info(model)                                  │   self = ModelSelection()                                                       │                                                            │
│   103 │   │   │   for key in ["modelfile", "license"]:                                         ╰─────────────────────────────────────────────────────────────────────────────────╯                                                            │
│   104 │   │   │   │   if key in info.keys():                                                                                                                                                                                                  │
│   105 │   │   │   │   │   del info[key]                                                                                                                                                                                                       │
│                                                                                                                                                                                                                                               │
│ /opt/homebrew/Cellar/oterm/0.1.4/libexec/lib/python3.11/site-packages/oterm/ollama.py:84 in get_model_info                                                                                                                                    │
│                                                                                                                                                                                                                                               │
│   81 │   async def get_model_info(self, model: str) -> dict[str, Any]:                        ╭───────────────────────── locals ──────────────────────────╮                                                                                   │
│   82 │   │   client = httpx.AsyncClient()                                                     │   client = <httpx.AsyncClient object at 0x1050bc650>      │                                                                                   │
│   83 │   │   response = await client.post(f"{Config.OLLAMA_URL}/show", json={"name": model})  │    model = 'codellama:7b-instruct'                        │                                                                                   │
│ ❱ 84 │   │   if response.json().get("error"):                                                 │ response = <Response [404 Not Found]>                     │                                                                                   │
│   85 │   │   │   raise OllamaError(response.json()["error"])                                  │     self = <oterm.ollama.OllamaAPI object at 0x1047384d0> │                                                                                   │
│   86 │   │   return response.json()                                                           ╰───────────────────────────────────────────────────────────╯                                                                                   │
│   87                                                                                                                                                                                                                                          │
│                                                                                                                                                                                                                                               │
│ /opt/homebrew/Cellar/oterm/0.1.4/libexec/lib/python3.11/site-packages/httpx/_models.py:755 in json                                                                                                                                            │
│                                                                                                                                                                                                                                               │
│    752 │   │   if self.charset_encoding is None and self.content and len(self.content) > 3:     ╭─────────────── locals ────────────────╮                                                                                                     │
│    753 │   │   │   encoding = guess_json_utf(self.content)                                      │ encoding = 'utf-8'                    │                                                                                                     │
│    754 │   │   │   if encoding is not None:                                                     │   kwargs = {}                         │                                                                                                     │
│ ❱  755 │   │   │   │   return jsonlib.loads(self.content.decode(encoding), **kwargs)            │     self = <Response [404 Not Found]> │                                                                                                     │
│    756 │   │   return jsonlib.loads(self.text, **kwargs)                                        ╰───────────────────────────────────────╯                                                                                                     │
│    757 │                                                                                                                                                                                                                                      │
│    758 │   @property                                                                                                                                                                                                                          │
│                                                                                                                                                                                                                                               │
│ /opt/homebrew/Cellar/[email protected]/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/__init__.py:346 in loads                                                                                                                │
│                                                                                                                                                                                                                                               │
│   343 │   if (cls is None and object_hook is None and                                          ╭───────────────── locals ─────────────────╮                                                                                                   │
│   344 │   │   │   parse_int is None and parse_float is None and                                │               cls = None                 │                                                                                                   │
│   345 │   │   │   parse_constant is None and object_pairs_hook is None and not kw):            │                kw = {}                   │                                                                                                   │
│ ❱ 346 │   │   return _default_decoder.decode(s)                                                │       object_hook = None                 │                                                                                                   │
│   347 │   if cls is None:                                                                      │ object_pairs_hook = None                 │                                                                                                   │
│   348 │   │   cls = JSONDecoder                                                                │    parse_constant = None                 │                                                                                                   │
│   349 │   if object_hook is not None:                                                          │       parse_float = None                 │                                                                                                   │
│                                                                                                │         parse_int = None                 │                                                                                                   │
│                                                                                                │                 s = '404 page not found' │                                                                                                   │
│                                                                                                ╰──────────────────────────────────────────╯                                                                                                   │
│                                                                                                                                                                                                                                               │
│ /opt/homebrew/Cellar/[email protected]/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/decoder.py:340 in decode                                                                                                                │
│                                                                                                                                                                                                                                               │
│   337 │   │   obj, end = self.raw_decode(s, idx=_w(s, 0).end())                                ╭────────────────────────────── locals ──────────────────────────────╮                                                                         │
│   338 │   │   end = _w(s, end).end()                                                           │   _w = <built-in method match of re.Pattern object at 0x1046cb6b0> │                                                                         │
│   339 │   │   if end != len(s):                                                                │  end = 4                                                           │                                                                         │
│ ❱ 340 │   │   │   raise JSONDecodeError("Extra data", s, end)                                  │  obj = 404                                                         │                                                                         │
│   341 │   │   return obj                                                                       │    s = '404 page not found'                                        │                                                                         │
│   342 │                                                                                        │ self = <json.decoder.JSONDecoder object at 0x10471b350>            │                                                                         │
│   343 │   def raw_decode(self, s, idx=0):                                                      ╰────────────────────────────────────────────────────────────────────╯                                                                         │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
JSONDecodeError: Extra data: line 1 column 5 (char 4)

I have the models

% ollama list
NAME                    	SIZE  	MODIFIED     
codellama:7b-instruct   	3.8 GB	7 weeks ago 	
llama2:latest           	3.8 GB	2 months ago	
llama2-uncensored:latest	3.8 GB	2 months ago

which oterm seems to pick up OK. But after that I don't know.

iTerm2 Problems on OSX

I tried running this on iTerm2 in OSX and ran into some problems:

  • All new chats are "JSON output" and I can't change the field
  • Pressing Tab on "chat" tabs into the Template field, and pressing it again only adds tabs inside the template widget. I need to use Shift+Tab to reach the other widgets.
  • The format is always "JSON output" and I can't find a way to disable it. I tried just pressing Enter/Return, but it does nothing.

Another "problem": adding export OLLAMA_URL=http://$OLLAMA_HOST:11434/api is trivial, but since all my other Ollama-based tools are happy with just $OLLAMA_HOST, I wish oterm would also use it, at least when there is no OLLAMA_URL in the environment.
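The requested fallback could look something like this sketch (resolve_ollama_url is a hypothetical helper, not part of oterm): prefer OLLAMA_URL if set, otherwise build a URL from OLLAMA_HOST, otherwise use the documented default.

```python
import os

def resolve_ollama_url() -> str:
    # Explicit OLLAMA_URL always wins.
    url = os.environ.get("OLLAMA_URL")
    if url:
        return url
    # Otherwise derive the API URL from OLLAMA_HOST, falling back to
    # the default host/port that oterm already assumes.
    host = os.environ.get("OLLAMA_HOST", "0.0.0.0:11434")
    return f"http://{host}/api"
```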

OLLAMA_HOST=0.0.0.0:11434 is not working on Windows 11, and there is perhaps a related bug where Ollama changes the local IP to its own default.

OLLAMA_HOST "0.0.0.0:11434" is not working on Windows 11

After changing OLLAMA_HOST in the Windows environment to 127.0.0.1:11434 (the default Ollama IP on Windows), it conflicts with the Ollama server itself or with other Ollama-related apps; they simply don't work. I then changed the same local IP inside oterm's config.y and everything works fine, except for one thing: when I press Ctrl+N and select an AI model, oterm crashes and gives me this screen.

image

Then when I reload oterm, the new chat is started with the selected model and everything works correctly. I'm not sure if the change of the local IP address is related to this bug.

Multiline paste doesn't work with non-expanded chat input

When the chat input is collapsed, a multiline paste only pastes the first line
image

but when I expand it before pasting, it works perfectly
image

Expected behavior: when pasting multiple lines, the input field should receive the whole clipboard entry and the chat input should automatically expand to accommodate the long text.

Brew Installation Error

Hi Oterm maintainer!

Thank you for the great software. I am trying to install this through brew but got an error

Here is the error message I get when I run brew install ggozad/formulas/oterm:

error: command '/usr/local/Homebrew/Library/Homebrew/shims/mac/super/clang' failed with exit code 1
  error: subprocess-exited-with-error
  
  × Building wheel for pillow (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> See above for output.
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  full command: /usr/local/Cellar/oterm/0.2.3/libexec/bin/python /usr/local/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py build_wheel /private/tmp/tmp_xn4z4iz
  cwd: /private/tmp/oterm--pillow-20240304-83518-iyy3ms/pillow-10.2.0
  Building wheel for pillow (pyproject.toml): finished with status 'error'
  ERROR: Failed building wheel for pillow
Failed to build pillow
ERROR: Could not build wheels for pillow, which is required to install pyproject.toml-based projects

Please help!

Fast new chat creation

  1. In the new chat dialog, by default select the model that was previously selected
  2. Pressing Enter creates a new chat

Configure markdown render size.

When a query returns a markdown section, it gets rendered as a sort of sub-window. Is there a way to configure the maximum size of this section? Typically I'd like it to be larger than the default.

a suggestion on historical chat selection

Currently, to look for earlier chats one can only select from the horizontal tabs above, which is difficult if one has a lot of chats.
I wonder if it would be possible to add a page that displays the chat history, with the chats shown as vertical tabs (just like in ChatGPT); I think that would make looking through the history a lot more convenient.

Thank you for reading and I hope to receive your comments :)

Idea suggestion: Press UP/DOWN to scroll through previous requests sent.

I have found myself multiple times pressing 'Up' expecting the previous request I sent to appear in the request box, so I can make a small edit and resubmit.

It would be an excellent improvement to be able to scroll through previous requests by pressing UP/DOWN. This is a common UI feature in tools such as Vim and many chat programs.

Without this facility it's often tiring to retype a previous request in an otherwise excellent tool.

Scrolling Behavior Issue on macOS

I am experiencing an issue with the scrolling behavior in oterm where the content appears to bounce up and down when I try to scroll using the trackpad on my MacBook Air. Additionally, the scrollbar seems to get stuck and then bounces up and down when you try to move it.

I've attached a screen recording that demonstrates the bouncing scrolling effect:
CleanShot 2023-12-26 at 19 25 42

Suggestion: Scroll history of current chat via keyboard

This is a great UI, it's almost 100% what I want!

It's perfect for my requirements, except for one thing: it seems to be necessary to use the mouse to scroll up and down through a chat.

I'd like to be able to use this program through the full-screen TTY Linux session (via Ctrl+Alt+F5) where using the mouse isn't an option.

Would it be possible to make this 100% keyboard-only compatible, by perhaps allowing the use of the PageUp and PageDown keys to scroll through the chat history?

Thanks, and I hope you give this feature a thought!

MarkupError closes the app instead of being handled inside the chat

I was having a casual conversation with llama2 7b when I got this crash.
I'm fairly certain it would be best to show the error inside the chat window instead of completely closing the app with a traceback (unless in some dev mode, I suppose).

╭───────────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────────────────────────────────────────────────╮
│ /usr/local/Cellar/oterm/0.1.10/libexec/lib/python3.11/site-packages/oterm/app/chat.py:87 in on_submit                                                                                                       │
│                                                                                                                                                                                                             │
│    84 │   │   response = ""                                                                                                                                                                                 │
│    85 │   │   async for text in self.ollama.stream(message):                                                                                                                                                │
│    86 │   │   │   response = text                                                                                                                                                                           │
│ ❱  87 │   │   │   chat_item.text = text                                                                                                                                                                     │
│    88 │   │   │   message_container.scroll_end()                                                                                                                                                            │
│    89 │   │   self.messages.append((Author.OLLAMA, response))                                                                                                                                               │
│    90 │   │   loading.remove()                                                                                                                                                                              │
│                                                                                                                                                                                                             │
│ ╭────────────────────────────────────────────────── locals ──────────────────────────────────────────────────╮                                                                                              │
│ │         chat_item = ChatItem()                                                                             │                                                                                              │
│ │             event = Submitted()                                                                            │                                                                                              │
│ │             input = FlexibleInput(id='prompt')                                                             │                                                                                              │
│ │           loading = LoadingIndicator()                                                                     │                                                                                              │
│ │           message = 'But I mean Jesus is kinda chill guy'                                                  │                                                                                              │
│ │ message_container = Vertical(id='messageContainer')                                                        │                                                                                              │
│ │          response = "I understand that you may have a certain perception of Jesus, but it's important"+491 │                                                                                              │
│ │              self = ChatContainer()                                                                        │                                                                                              │
│ │              text = "I understand that you may have a certain perception of Jesus, but it's important"+491 │                                                                                              │
│ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────╯                                                                                              │
│                                                                                                                                                                                                             │
│ /usr/local/Cellar/oterm/0.1.10/libexec/lib/python3.11/site-packages/oterm/app/chat.py:132 in watch_text                                                                                                     │
│                                                                                                                                                                                                             │
│   129 │   │   try:                                                                             ╭──────────────────────────────────────────── locals ─────────────────────────────────────────────╮          │
│   130 │   │   │   widget = self.query_one(".text", Static)                                     │   self = ChatItem()                                                                             │          │
│   131 │   │   │   if widget:                                                                   │   text = "I understand that you may have a certain perception of Jesus, but it's important"+491 │          │
│ ❱ 132 │   │   │   │   widget.update(text)                                                      │ widget = Static()                                                                               │          │
│   133 │   │   except NoMatches:                                                                ╰─────────────────────────────────────────────────────────────────────────────────────────────────╯          │
│   134 │   │   │   pass                                                                                                                                                                                      │
│   135                                                                                                                                                                                                       │
│                                                                                                                                                                                                             │
│ /usr/local/Cellar/oterm/0.1.10/libexec/lib/python3.11/site-packages/textual/widgets/_static.py:97 in update                                                                                                 │
│                                                                                                                                                                                                             │
│   94 │   │   │   renderable: A new rich renderable. Defaults to empty renderable;             ╭────────────────────────────────────────────── locals ───────────────────────────────────────────────╮       │
│   95 │   │   """                                                                              │ renderable = "I understand that you may have a certain perception of Jesus, but it's important"+491 │       │
│   96 │   │   _check_renderable(renderable)                                                    │       self = Static()                                                                               │       │
│ ❱ 97 │   │   self.renderable = renderable                                                     ╰─────────────────────────────────────────────────────────────────────────────────────────────────────╯       │
│   98 │   │   self.refresh(layout=True)                                                                                                                                                                      │
│   99                                                                                                                                                                                                        │
│                                                                                                                                                                                                             │
│ /usr/local/Cellar/oterm/0.1.10/libexec/lib/python3.11/site-packages/textual/widgets/_static.py:76 in renderable                                                                                             │
│                                                                                                                                                                                                             │
│   73 │   def renderable(self, renderable: RenderableType) -> None:                            ╭────────────────────────────────────────────── locals ───────────────────────────────────────────────╮       │
│   74 │   │   if isinstance(renderable, str):                                                  │ renderable = "I understand that you may have a certain perception of Jesus, but it's important"+491 │       │
│   75 │   │   │   if self.markup:                                                              │       self = Static()                                                                               │       │
│ ❱ 76 │   │   │   │   self._renderable = Text.from_markup(renderable)                          ╰─────────────────────────────────────────────────────────────────────────────────────────────────────╯       │
│   77 │   │   │   else:                                                                                                                                                                                      │
│   78 │   │   │   │   self._renderable = Text(renderable)                                                                                                                                                    │
│   79 │   │   else:                                                                                                                                                                                          │
│                                                                                                                                                                                                             │
│ /usr/local/Cellar/oterm/0.1.10/libexec/lib/python3.11/site-packages/rich/text.py:284 in from_markup                                                                                                         │
│                                                                                                                                                                                                             │
│ /usr/local/Cellar/oterm/0.1.10/libexec/lib/python3.11/site-packages/rich/markup.py:164 in render                                                                                                            │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
MarkupError: closing tag '[/INST]' at position 564 doesn't match any open tag
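The model emitted a literal [/INST] token, which Rich then parsed as a closing markup tag. One plausible fix (a sketch only; the actual fix would live in oterm's widget code, e.g. via rich.markup.escape) is to escape opening brackets before rendering model output:

```python
def escape_markup(text: str) -> str:
    # Escape "[" so bracketed model tokens such as [/INST] render
    # literally instead of being parsed as Rich markup tags.
    # This mirrors the basic behavior of rich.markup.escape.
    return text.replace("[", r"\[")

print(escape_markup("closing [/INST] tag"))  # closing \[/INST] tag
```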

OperationalError: no such column: format

Hi,

First, I wanna say that this project is very promising, good job!

Now the issue: I just installed oterm on a Mac M1 through Homebrew, and when I try to run it I get the error: OperationalError: no such column: format

Here's the stack trace:

Stack trace
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /opt/homebrew/Cellar/oterm/0.1.12/libexec/lib/python3.11/site-packages/oterm │
│ /app/oterm.py:108 in on_mount                                                │
│                                                                              │
│   105 │                                                                      │
│   106 │   async def on_mount(self) -> None:                                  │
│   107 │   │   self.store = await Store.create()                              │
│ ❱ 108 │   │   saved_chats = await self.store.get_chats()  # type: ignore     │
│   109 │   │   if not saved_chats:                                            │
│   110 │   │   │   self.action_new_chat()                                     │
│   111 │   │   else:                                                          │
│                                                                              │
│ ╭────────────────────── locals ───────────────────────╮                      │
│ │ self = OTerm(title='oTerm', classes={'-dark-mode'}) │                      │
│ ╰─────────────────────────────────────────────────────╯                      │
│                                                                              │
│ /opt/homebrew/Cellar/oterm/0.1.12/libexec/lib/python3.11/site-packages/oterm │
│ /store/store.py:159 in get_chats                                             │
│                                                                              │
│   156 │   │   tuple[int, str, str, list[int], str | None, str | None, Litera │
│   157 │   ]:                                                                 │
│   158 │   │   async with aiosqlite.connect(self.db_path) as connection:      │
│ ❱ 159 │   │   │   chats = await chat_queries.get_chats(connection)  # type:  │
│   160 │   │   │   chats = [                                                  │
│   161 │   │   │   │   (id, name, model, json.loads(context), template, syste │
│   162 │   │   │   │   for id, name, model, context, template, system, format │
│                                                                              │
│ ╭─────────────────────────── locals ───────────────────────────╮             │
│ │ connection = <Connection(Thread-5, started 6214348800)>      │             │
│ │       self = <oterm.store.store.Store object at 0x106b87e10> │             │
│ ╰──────────────────────────────────────────────────────────────╯             │
│                                                                              │
│ /opt/homebrew/Cellar/oterm/0.1.12/libexec/lib/python3.11/site-packages/aiosq │
│ l/queries.py:98 in afn                                                       │
│                                                                              │
│    95                                                                        │
│    96 def _make_async_fn(fn: QueryFn) -> QueryFn:                            │
│    97 │   async def afn(self: Queries, conn, *args, **kwargs):               │
│ ❱  98 │   │   return await fn(self, conn, *args, **kwargs)                   │
│    99 │                                                                      │
│   100 │   return _query_fn(afn, fn.__name__, fn.__doc__, fn.sql, fn.operatio │
│   101                                                                        │
│                                                                              │
│ ╭───────────────────────────────── locals ─────────────────────────────────╮ │
│ │   args = ()                                                              │ │
│ │   conn = <Connection(Thread-5, started 6214348800)>                      │ │
│ │     fn = <function _make_sync_fn.<locals>.fn at 0x106aa2c00>             │ │
│ │ kwargs = {}                                                              │ │
│ │   self = Queries(['delete_chat', 'delete_chat_cursor', 'get_chat',       │ │
│ │          'get_chat_cursor', 'get_chats', 'get_chats_cursor',             │ │
│ │          'get_messages', 'get_messages_cursor', 'rename_chat',           │ │
│ │          'rename_chat_cursor', 'save_chat', 'save_chat_cursor',          │ │
│ │          'save_context', 'save_context_cursor', 'save_message',          │ │
│ │          'save_message_cursor'])                                         │ │
│ ╰──────────────────────────────────────────────────────────────────────────╯ │
│                                                                              │
│ /opt/homebrew/Cellar/oterm/0.1.12/libexec/lib/python3.11/site-packages/aiosq │
│ l/adapters/aiosqlite.py:24 in select                                         │
│                                                                              │
│   21 │   │   return sql                                                      │
│   22 │                                                                       │
│   23 │   async def select(self, conn, _query_name, sql, parameters, record_c │
│ ❱ 24 │   │   async with conn.execute(sql, parameters) as cur:                │
│   25 │   │   │   results = await cur.fetchall()                              │
│   26 │   │   │   if record_class is not None:                                │
│   27 │   │   │   │   column_names = [c[0] for c in cur.description]          │
│                                                                              │
│ ╭───────────────────────────────── locals ─────────────────────────────────╮ │
│ │  _query_name = 'get_chats'                                               │ │
│ │         conn = <Connection(Thread-5, started 6214348800)>                │ │
│ │   parameters = ()                                                        │ │
│ │ record_class = None                                                      │ │
│ │         self = <aiosql.adapters.aiosqlite.AioSQLiteAdapter object at     │ │
│ │                0x1069d2cd0>                                              │ │
│ │          sql = 'SELECT id, name, model, context, template, system,       │ │
│ │                format FROM chat;'                                        │ │
│ ╰──────────────────────────────────────────────────────────────────────────╯ │
│                                                                              │
│ /opt/homebrew/Cellar/oterm/0.1.12/libexec/lib/python3.11/site-packages/aiosq │
│ lite/context.py:39 in __aenter__                                             │
│                                                                              │
│   36 │   │   return self._coro.__await__()                                   │
│   37 │                                                                       │
│   38 │   async def __aenter__(self) -> _T:                                   │
│ ❱ 39 │   │   self._obj = await self._coro                                    │
│   40 │   │   return self._obj                                                │
│   41 │                                                                       │
│   42 │   async def __aexit__(self, exc_type, exc, tb) -> None:               │
│                                                                              │
│ ╭──────────────────────── locals ─────────────────────────╮                  │
│ │ self = <aiosqlite.context.Result object at 0x106b00b40> │                  │
│ ╰─────────────────────────────────────────────────────────╯                  │
│                                                                              │
│ /opt/homebrew/Cellar/oterm/0.1.12/libexec/lib/python3.11/site-packages/aiosq │
│ lite/core.py:190 in execute                                                  │
│                                                                              │
│   187 │   │   """Helper to create a cursor and execute the given query."""   │
│   188 │   │   if parameters is None:                                         │
│   189 │   │   │   parameters = []                                            │
│ ❱ 190 │   │   cursor = await self._execute(self._conn.execute, sql, paramete │
│   191 │   │   return Cursor(self, cursor)                                    │
│   192 │                                                                      │
│   193 │   @contextmanager                                                    │
│                                                                              │
│ ╭───────────────────────────────── locals ─────────────────────────────────╮ │
│ │ parameters = ()                                                          │ │
│ │       self = <Connection(Thread-5, started 6214348800)>                  │ │
│ │        sql = 'SELECT id, name, model, context, template, system, format  │ │
│ │              FROM chat;'                                                 │ │
│ ╰──────────────────────────────────────────────────────────────────────────╯ │
│                                                                              │
│ /opt/homebrew/Cellar/oterm/0.1.12/libexec/lib/python3.11/site-packages/aiosq │
│ lite/core.py:133 in _execute                                                 │
│                                                                              │
│   130 │   │                                                                  │
│   131 │   │   self._tx.put_nowait((future, function))                        │
│   132 │   │                                                                  │
│ ❱ 133 │   │   return await future                                            │
│   134 │                                                                      │
│   135 │   async def _connect(self) -> "Connection":                          │
│   136 │   │   """Connect to the actual sqlite database."""                   │
│                                                                              │
│ ╭───────────────────────────────── locals ─────────────────────────────────╮ │
│ │     args = (                                                             │ │
│ │            │   'SELECT id, name, model, context, template, system,       │ │
│ │            format FROM chat;',                                           │ │
│ │            │   ()                                                        │ │
│ │            )                                                             │ │
│ │       fn = <built-in method execute of sqlite3.Connection object at      │ │
│ │            0x106b81e40>                                                  │ │
│ │ function = functools.partial(<built-in method execute of                 │ │
│ │            sqlite3.Connection object at 0x106b81e40>, 'SELECT id, name,  │ │
│ │            model, context, template, system, format FROM chat;', ())     │ │
│ │   future = <Future finished exception=OperationalError('no such column:  │ │
│ │            format')>                                                     │ │
│ │   kwargs = {}                                                            │ │
│ │     self = <Connection(Thread-5, started 6214348800)>                    │ │
│ ╰──────────────────────────────────────────────────────────────────────────╯ │
│                                                                              │
│ /opt/homebrew/Cellar/oterm/0.1.12/libexec/lib/python3.11/site-packages/aiosq │
│ lite/core.py:106 in run                                                      │
│                                                                              │
│   103 │   │   │   │   break                                                  │
│   104 │   │   │   try:                                                       │
│   105 │   │   │   │   LOG.debug("executing %s", function)                    │
│ ❱ 106 │   │   │   │   result = function()                                    │
│   107 │   │   │   │   LOG.debug("operation %s completed", function)          │
│   108 │   │   │   │                                                          │
│   109 │   │   │   │   def set_result(fut, result):                           │
│                                                                              │
│ ╭───────────────────────────────── locals ─────────────────────────────────╮ │
│ │      function = functools.partial(<built-in method close of              │ │
│ │                 sqlite3.Connection object at 0x106b81e40>)               │ │
│ │        future = <Future finished result=None>                            │ │
│ │        result = None                                                     │ │
│ │          self = <Connection(Thread-5, started 6214348800)>               │ │
│ │ set_exception = <function Connection.run.<locals>.set_exception at       │ │
│ │                 0x106b63100>                                             │ │
│ │    set_result = <function Connection.run.<locals>.set_result at          │ │
│ │                 0x106b63060>                                             │ │
│ ╰──────────────────────────────────────────────────────────────────────────╯ │
╰──────────────────────────────────────────────────────────────────────────────╯
OperationalError: no such column: format

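The query in the traceback selects a format column that databases created by older oterm versions don't have. The proper fix is a schema migration in oterm itself, but the failure mode can be illustrated with a one-off ALTER TABLE, assuming the table layout shown in the trace:

```python
import sqlite3

# Recreate an old-style chat table (no "format" column, matching the
# SELECT in the traceback minus that column) and add the missing
# column the way a migration would.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE chat (id INTEGER PRIMARY KEY, name TEXT, model TEXT,"
    " context TEXT, template TEXT, system TEXT)"
)
conn.execute("ALTER TABLE chat ADD COLUMN format TEXT")
columns = [row[1] for row in conn.execute("PRAGMA table_info(chat)")]
print(columns)  # the "format" column is now present
```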

Crash on empty message

To reproduce, just post an empty message.

[suhr@nixos:~/oterm]$ oterm
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ /nix/store/39akrbfw5pyriz81qfn7i21g0d805vj7-python3.11-oterm-0.1.16/lib/python3.11/site-packages/oterm/app/chat.py:93 in on_submit                                                                                                                          │
│                                                                                                                                                                                                                                                             │
│    90 │   │   message_container.scroll_end()                                                   ╭────────────────────── locals ───────────────────────╮                                                                                                      │
│    91 │   │                                                                                    │         chat_item = ChatItem()                      │                                                                                                      │
│    92 │   │   response = ""                                                                    │             event = Submitted()                     │                                                                                                      │
│ ❱  93 │   │   async for text in self.ollama.stream(message):                                   │             input = FlexibleInput(id='prompt')      │                                                                                                      │
│    94 │   │   │   response = text                                                              │           loading = LoadingIndicator()              │                                                                                                      │
│    95 │   │   │   chat_item.text = text                                                        │           message = ''                              │                                                                                                      │
│    96 │   │   │   message_container.scroll_end()                                               │ message_container = Vertical(id='messageContainer') │                                                                                                      │
│                                                                                                │          response = ''                              │                                                                                                      │
│                                                                                                │              self = ChatContainer()                 │                                                                                                      │
│                                                                                                │              text = ''                              │                                                                                                      │
│                                                                                                ╰─────────────────────────────────────────────────────╯                                                                                                      │
│                                                                                                                                                                                                                                                             │
│ /nix/store/39akrbfw5pyriz81qfn7i21g0d805vj7-python3.11-oterm-0.1.16/lib/python3.11/site-packages/oterm/ollama.py:43 in stream                                                                                                                               │
│                                                                                                                                                                                                                                                             │
│    40 │   async def stream(self, prompt) -> AsyncGenerator[str, Any]:                          ╭────────────────────────── locals ───────────────────────────╮                                                                                              │
│    41 │   │   context = []                                                                     │ context = []                                                │                                                                                              │
│    42 │   │                                                                                    │     ctx = []                                                │                                                                                              │
│ ❱  43 │   │   async for text, ctx in self._agenerate(                                          │  prompt = ''                                                │                                                                                              │
│    44 │   │   │   prompt=prompt,                                                               │    self = <oterm.ollama.OllamaLLM object at 0x7fffee5ac390> │                                                                                              │
│    45 │   │   │   context=self.context,                                                        │    text = ''                                                │                                                                                              │
│    46 │   │   ):                                                                               ╰─────────────────────────────────────────────────────────────╯                                                                                              │
│                                                                                                                                                                                                                                                             │
│ /nix/store/39akrbfw5pyriz81qfn7i21g0d805vj7-python3.11-oterm-0.1.16/lib/python3.11/site-packages/oterm/ollama.py:82 in _agenerate                                                                                                                           │
│                                                                                                                                                                                                                                                             │
│    79 │   │   │   │   │   raise OllamaError(body["error"])                                     ╭──────────────────────────────────────────────────────── locals ────────────────────────────────────────────────────────╮                                   │
│    80 │   │   │   │                                                                            │     body = {'model': 'mixtral:instruct', 'created_at': '2023-12-16T03:57:13.174105644Z', 'response': '', 'done': True} │                                   │
│    81 │   │   │   │   if body.get("done", False):                                              │   client = <httpx.AsyncClient object at 0x7fffee391b90>                                                                │                                   │
│ ❱  82 │   │   │   │   │   yield res, body["context"]                                           │  context = []                                                                                                          │                                   │
│    83                                                                                          │      jsn = {'model': 'mixtral:instruct', 'prompt': '', 'context': []}                                                  │                                   │
│    84                                                                                          │     line = '{"model":"mixtral:instruct","created_at":"2023-12-16T03:57:13.174105644Z","respo'+20                       │                                   │
│    85 class OllamaAPI:                                                                         │   prompt = ''                                                                                                          │                                   │
│                                                                                                │      res = ''                                                                                                          │                                   │
│                                                                                                │ response = <Response [200 OK]>                                                                                         │                                   │
│                                                                                                │     self = <oterm.ollama.OllamaLLM object at 0x7fffee5ac390>                                                           │                                   │
│                                                                                                ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯                                   │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
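From the locals in the traceback, the done-response for an empty prompt appears to carry no "context" key, which is what makes `yield res, body["context"]` blow up. A minimal guard in the submit handler would sidestep the whole code path; this is a sketch with a hypothetical helper, not oterm's actual fix:

```python
def should_submit(message: str) -> bool:
    # Skip inference entirely when the prompt is empty or whitespace-only,
    # since the server replies with a done-response lacking "context".
    return bool(message.strip())
```

In `on_submit`, the handler would return early when `should_submit(message)` is false, before calling `self.ollama.stream(message)`.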

Renaming with no chat open crashes oterm

How to reproduce:

  1. Ensure that there are no chats open
  2. Click Rename chat
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ /nix/store/gfh9djsd25vbrmcfdcfdl3a61c6sxbr6-python3.11-oterm-0.1.16/lib/python3.11/site-packages/oterm/app/oterm.py:68 in action_rename_chat                                                                                                                │
│                                                                                                                                                                                                                                                             │
│    65 │                                                                                        ╭────────────────────── locals ───────────────────────╮                                                                                                      │
│    66 │   async def action_rename_chat(self) -> None:                                          │ self = OTerm(title='oTerm', classes={'-dark-mode'}) │                                                                                                      │
│    67 │   │   tabs = self.query_one(TabbedContent)                                             │ tabs = TabbedContent(id='tabs')                     │                                                                                                      │
│ ❱  68 │   │   id = int(tabs.active.split("-")[1])                                              ╰─────────────────────────────────────────────────────╯                                                                                                      │
│    69 │   │   chat = await self.store.get_chat(id)                                                                                                                                                                                                          │
│    70 │   │   if chat is None:                                                                                                                                                                                                                              │
│    71 │   │   │   return                                                                                                                                                                                                                                    │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
IndexError: list index out of range

suggestion: view or export model info/chat metadata

The model selection screen makes it possible to set the model, the system prompt and the message template.

Once a chat is created it is no longer possible to view how the system prompt, the message template or other parameters are set.

Since I often play around with different system prompts,
I would like to be able to view those settings from the model selection screen later on in the UI or export them to the clipboard.

[BUG] ssh+tmux copy/paste problem

Hi! It's good to have oterm, which is such a nice CLI Ollama interface.
When using oterm via ssh+tmux (on Windows Terminal 1.18.3181; the target machine is Debian with Ollama correctly installed), when I try to drag to select, the following error occurs and oterm crashes:

╭────────────────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────────────╮
│ /home/zmy/.local/lib/python3.11/site-packages/oterm/app/widgets/chat.py:173 in on_click                                               │
│                                                                                                                                       │
│   170 │                                                                                                                               │
│   171 │   @on(Click)                                                                                                                  │
│   172 │   async def on_click(self, event: Click) -> None:                                                                             │
│ ❱ 173 │   │   pyperclip.copy(self.text)                                                                                               │
│   174 │   │   widgets = self.query(".text")                                                                                           │
│   175 │   │   for widget in widgets:                                                                                                  │
│   176 │   │   │   widget.styles.animate("opacity", 0.5, duration=0.1)                                                                 │
│                                                                                                                                       │
│ ╭─────────────────────────── locals ───────────────────────────╮                                                                      │
│ │ event = Click(x=38, y=1, screen_x=54, screen_y=37, button=1) │                                                                      │
│ │  self = ChatItem()                                           │                                                                      │
│ ╰──────────────────────────────────────────────────────────────╯                                                                      │
│                                                                                                                                       │
│ /home/zmy/.local/lib/python3.11/site-packages/pyperclip/__init__.py:659 in lazy_load_stub_copy                                        │
│                                                                                                                                       │
│   656 │   '''                                                                                                                         │
│   657 │   global copy, paste                                                                                                          │
│   658 │   copy, paste = determine_clipboard()                                                                                         │
│ ❱ 659 │   return copy(text)                                                                                                           │
│   660                                                                                                                                 │
│   661                                                                                                                                 │
│   662 def lazy_load_stub_paste():                                                                                                     │
│                                                                                                                                       │
│ ╭─────────────────────────────────────────── locals ────────────────────────────────────────────╮                                     │
│ │ text = ' I apologize for any confusion earlier. With the new information provided, the i'+180 │                                     │
│ ╰───────────────────────────────────────────────────────────────────────────────────────────────╯                                     │
│                                                                                                                                       │
│ /home/zmy/.local/lib/python3.11/site-packages/pyperclip/__init__.py:336 in __call__                                                   │
│                                                                                                                                       │
│   333 │   class ClipboardUnavailable(object):                                                                                         │
│   334 │   │                                                                                                                           │
│   335 │   │   def __call__(self, *args, **kwargs):                                                                                    │
│ ❱ 336 │   │   │   raise PyperclipException(EXCEPT_MSG)                                                                                │
│   337 │   │                                                                                                                           │
│   338 │   │   if PY2:                                                                                                                 │
│   339 │   │   │   def __nonzero__(self):                                                                                              │
│                                                                                                                                       │
│ ╭────────────────────────────────────────────── locals ──────────────────────────────────────────────╮                                │
│ │   args = (' I apologize for any confusion earlier. With the new information provided, the i'+180,) │                                │
│ │ kwargs = {}                                                                                        │                                │
│ │   self = <pyperclip.init_no_clipboard.<locals>.ClipboardUnavailable object at 0x7fbe01576f90>      │                                │
│ ╰────────────────────────────────────────────────────────────────────────────────────────────────────╯                                │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
PyperclipException:
    Pyperclip could not find a copy/paste mechanism for your system.
    For more information, please visit https://pyperclip.readthedocs.io/en/latest/index.html#not-implemented-error

I wonder if this is caused by tmux on the target machine. Is it possible to fix?
Thank you for reading, and please contact me if you need more information :D
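The traceback shows `pyperclip.copy` raising `PyperclipException` when no clipboard backend is available (common over ssh+tmux, where neither xclip/xsel nor a display is present). A click handler could degrade gracefully instead of crashing; here is a sketch of a generic wrapper (in oterm the `copy_fn` would be `pyperclip.copy`; the helper name is hypothetical):

```python
def safe_copy(copy_fn, text: str) -> bool:
    # Wrap the clipboard call so a missing backend degrades to a no-op
    # (optionally surfaced as a notification) instead of a crash.
    try:
        copy_fn(text)
        return True
    except Exception:
        return False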

Environment variables are ignored in the ChatEdit screen

The symptom: the user sets a custom Ollama URL via an environment variable and then opens the ChatEdit screen. The screen does not use the customized URL to fetch the model list, which results in an exception (app crash).

In oterm/app/chat_edit.py, ollama is imported directly:

import ollama

self.models = ollama.list()["models"]

Here, the module-level client created by the ollama Python library is used. This client always targets the default host and port, 127.0.0.1:11434.
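One way to fix this is to resolve the host from the environment (mirroring the precedence oterm documents: OLLAMA_URL, then OLLAMA_HOST, then the default) and pass it to an explicit client instead of the module-level one. A sketch, with the helper name my own; note oterm may also need to normalize an OLLAMA_HOST value that lacks a scheme:

```python
def resolve_ollama_host(env: dict[str, str]) -> str:
    # OLLAMA_URL (full URL) takes priority over OLLAMA_HOST (host/port),
    # falling back to the library default of 127.0.0.1:11434.
    if "OLLAMA_URL" in env:
        return env["OLLAMA_URL"]
    return env.get("OLLAMA_HOST", "http://127.0.0.1:11434")
```

The ollama Python library's `Client` accepts a host argument, so ChatEdit could then use `Client(host=resolve_ollama_host(dict(os.environ))).list()["models"]` rather than `ollama.list()`.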

Oterm is slow

Especially when a sufficiently large piece of code is being generated.

Can't copy/paste or run generated code

Tried out oterm (looks pretty good).
I told Mistral that I need a snake game using the pygame library.
It wrote out a bunch of code.

I could not do anything with the code.

At the very least you should be able to copy and paste.
Another option is to have a button that appears when code is generated and, when clicked, allows saving the generated code to a file or running it.

Suggestion: Allow scrolling during generation

In its current state, oterm scrolls all the way to the bottom every time a new token is generated. This can be frustrating when trying to scroll up to read text while a response is still being generated.
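A common remedy is to auto-scroll only when the user is already at (or near) the bottom, and leave the scroll position alone otherwise. A sketch of the predicate; the attribute names match Textual's widget scrolling API (`scroll_y`, `max_scroll_y`), but treat the wiring as an assumption:

```python
def should_autoscroll(scroll_y: float, max_scroll_y: float, slack: float = 1.0) -> bool:
    # Follow the stream only if the user hasn't scrolled up; `slack`
    # tolerates sub-line offsets so "almost at the bottom" still follows.
    return scroll_y >= max_scroll_y - slack
```

The chat loop would then call `message_container.scroll_end()` per token only when `should_autoscroll(container.scroll_y, container.max_scroll_y)` is true.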

Enhancement: Refined selection capability

Human me disclaimer

I want to see a refinement of the clipboard copy operation. I wrote a general enhancement description and refined it through GPT. I think the refined statement is by far clearer than what I wrote, and makes a lot of sense. It follows below:

Summary

The current user interface supports navigation through tabbing, mouse interactions, and scrolling to access conversation history. A gap exists when interacting with responses containing markdown quoted content, as users cannot selectively copy content within these quotes. This request aims to introduce a capability for users to choose specifically between copying only the content within markdown quotes or the entire response, enhancing interaction precision with the conversation history.

Description

Users navigate the user interface using tabbing, mouse clicks, and scrolling, enabling them to select items in the history for copying to the clipboard. In dialog sessions with a mix of responses, including markdown quoted content, the interface does not currently offer an option to copy just the content within these quotes. This limitation affects the process of retrieving specific information from extensive conversations, particularly when the information is embedded in quotes.

Enhancement Detail

The enhancement focuses on allowing users to make a "sub-select" or "recursive select" within a response that contains markdown quoted content. This added functionality will enable a more detailed level of interaction with the content, allowing users to copy only the content within the markdown quotes to the clipboard based on their selection point:

  • When a user clicks on a markdown quoted item, the system will copy only the content within the markdown quotes to the clipboard.
  • When a user clicks outside the markdown quoted area of a response, the system will copy the entire response, including the markdown quoted content, maintaining the current method of interaction for users needing to copy the response.

Assumptions

  • Dialog sessions may be lengthy and include a variety of responses.
  • Some responses will contain valuable information within markdown quotes.
  • Navigation through the application involves a combination of tabs, arrow keys, and mouse input.

Acceptance Criteria

  • Selective Copy Functionality
    • The system must enable users to copy content within markdown quotes by clicking on the quoted text, ensuring only the selected quoted content is copied to the clipboard.
  • Full Response Copy Functionality
    • When clicking outside the markdown quoted text, the system must copy the entire response, including the quoted content, thereby not altering the interaction for copying full responses.
  • Consistency Across Conversations
    • This feature must work consistently in dialog sessions of any length and complexity, facilitating specific information retrieval without the conversation's extent impacting performance.
  • Integration with Current Navigation Methods
    • The feature's introduction should not disrupt current navigation practices within the application, ensuring that it complements existing methods like tabbing, arrow keys, and mouse input.

By implementing this feature, the application will provide users with enhanced control over the content they wish to extract from the conversation history, particularly useful in dialog sessions rich in specific, quotable information.

Copy "code" to clipboard

It would simply be convenient to be able to copy only the code text when it is output by an AI, or even to select portions of text so you can copy them the classic way with Ctrl+C.

My suggestion is a bit obvious, but from my first uses I found it limiting not being able to select and copy portions of text.
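Since responses are rendered as markdown, copying "only the code" amounts to pulling the fenced blocks out of the raw response text. A minimal sketch (the function is illustrative, not part of oterm):

```python
import re

FENCE = "`" * 3  # literal triple-backtick marker

def extract_code_blocks(markdown: str) -> list[str]:
    # Return the bodies of fenced code blocks, dropping the info string
    # (e.g. "python") on the opening fence.
    pattern = re.compile(FENCE + r"[^\n]*\n(.*?)" + FENCE, re.DOTALL)
    return pattern.findall(markdown)
```

A click handler could then offer the concatenated blocks to the clipboard when a response contains code, and fall back to the full text otherwise.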

30 seconds, sometimes even 1 minute, before the text produced by the AI can be copied

When the bot completes a response, especially a long one, at least 30 seconds and sometimes even a minute pass before I can copy the text to the clipboard by clicking on it. Scrolling is also very slow; it seems related to memory running out or something on the AI side after it has produced a response, but I don't understand what that has to do with being unable to interact with or scroll the text, given that the generation has already finished. I wonder whether the two things can be made independent, so that once the AI has finished writing, oterm immediately becomes available for other user operations.

Configuration file

I think it would be super nice to add the possibility to specify the location where the chat history is stored. The config file could live inside ~/.ollama or ~/.config/oterm.
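A conventional way to resolve such a config location is to honour `XDG_CONFIG_HOME` with a `~/.config/oterm` fallback; a hypothetical sketch of the lookup (not oterm's current behaviour):

```python
import os
from pathlib import Path

def config_dir() -> Path:
    # Respect XDG_CONFIG_HOME when set, else fall back to ~/.config/oterm.
    base = os.environ.get("XDG_CONFIG_HOME", str(Path.home() / ".config"))
    return Path(base) / "oterm"
```

The config file read from that directory could then point at a user-chosen path for the sqlite chat-history database.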
