
aiui.nvim

A unified set of modules to interact with different LLM providers.

Why aiui.nvim?

Unify your development experience across different LLM providers with aiui.nvim's adaptable UI modules, allowing for easy model switching without changing your workflow.

Features

  • Unified LLM Interface: Swap LLM providers on-the-fly while keeping your workflow consistent.

  • In-Editor Chat: Engage with LLMs in a familiar chat interface inside Neovim.

  • Single-Buffer Diff: Visualize LLM-suggested code changes directly within your buffer, akin to a git diff.

  • Fuzzy Search to Select Chats: Fuzzy search-enabled model and instance switching, plus resuming of past chats.

  • Conversations as Files: Store chat logs as readable Markdown and session data as JSON for external access.

Check out the roadmap for upcoming features.

Getting Started

Assuming you are using lazy.nvim:

{
  "MLFlexer/aiui.nvim",
  dependencies = {
    "nvim-lua/plenary.nvim",
    "nvim-telescope/telescope.nvim",
  },

  init = function()
    --adds default keybindings and initializes
    require("aiui").add_defaults()

    -- If NOT using the default setup:
    -- add your LLM provider
    -- local ModelCollection = require("aiui.ModelCollection")
    -- local ollama_client = require("models.clients.ollama.ollama_curl")
    -- ModelCollection:add_models(ollama_client:get_default_models())

    -- Add any agents you like
    -- ModelCollection:add_agents({
    -- 	default_agent = "You are a chatbot, answer short and concise.",
    -- })

    -- Initialize the Chat and set default keybinds and autocmds
    -- local Chat = require("aiui.Chat")
    -- Chat:new({
    --   name = "Mistral Tiny",
    --   model = "mistral-tiny",
    --   context = {},
    --   agent = "default_agent",
    -- })
    -- Chat:apply_default_keymaps()
    -- Chat:apply_autocmd()
  end,
}

Need help? Check out how the default setup is done in aiui/defaults.lua, or ask in the Discussions tab.

Adding your own LLM client

This section is unfinished; for now, implement the function annotations for the ModelClient. Need help? See the clients directory or ask in the Discussions tab.
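Until the guide is written, a hypothetical client might look like the sketch below. The exact interface is defined by the ModelClient annotations in the repository; the function names, fields, and callback shape here are assumptions for illustration only.

```lua
-- Hypothetical sketch of a custom client; the real interface is
-- defined by the ModelClient annotations in the repository.
local my_client = {}

-- Return the models this client provides (names are illustrative).
function my_client:get_default_models()
  return {
    { name = "My Model", model = "my-model-v1", client = my_client },
  }
end

-- Send a prompt plus prior context to the backend and hand the reply
-- to result_handler (asynchronously, e.g. via a plenary.job).
function my_client:request(model, prompt, context, result_handler)
  -- ... spawn curl or make an HTTP request here ...
  result_handler({ "Hello from my-model-v1" })
end

return my_client
```

A client written this way could then be registered with `ModelCollection:add_models(my_client:get_default_models())` as in the Getting Started snippet.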

Roadmap

Chat Features

  • Highly customizable.
  • Support for concurrent chat instances.
  • Persisting and retrieving chat history.
  • Code reference shortcuts (like @some_function or /some_file) within chats.
  • New chat creation and retrieval via fuzzy search.
  • Real-time chat streaming.
  • Popup chat window.
  • Buffer chat window.

Inline Code Interactions

  • Integrated diff views for in-buffer modifications.
  • Quickly add comments, fix errors, etc. for a visual selection.
  • LSP interactions to fix errors or other LSP warnings.
  • Start a Chat window with the visual selection.

aiui.nvim's People

Contributors

  • mlflexer

Forkers

  • toffernator

aiui.nvim's Issues

Improve readability of testing client

Improve the readability of the testing client's output. It might be useful to make it echo the input and/or the full context, but without spawning a process: just plain Lua.
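Such an echo client could be very small. This sketch assumes the same hypothetical `request` shape as the other clients (prompt as a list of lines, a `result_handler` callback); all names are illustrative.

```lua
-- Sketch of a plain-Lua testing client: no subprocess, just echo the
-- prompt back through the normal result callback.
local echo_client = {}

-- prompt is assumed to be a list of lines; result_handler is the
-- callback the chat window expects (both are assumptions).
function echo_client:request(model, prompt, context, result_handler)
  result_handler({ "echo: " .. table.concat(prompt, "\n") })
end

return echo_client
```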

Make '#' and '@' symbols refer to code in a chat window

Depends on #2

The # and @ symbols should be usable to refer to code, so that the user can use them as a shorthand for inserting it into a prompt.

The current best idea for how to do this is to use the LSP to query for function/method names and then replace the reference with the actual code when prompting the LLM.
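The LSP half of that idea could be sketched with the standard `workspace/symbol` request. This is an assumed approach, not the plugin's implementation; only the Neovim API calls are real.

```lua
-- Sketch (assumed approach): resolve an @name reference via the LSP
-- before sending the prompt, using the workspace/symbol request.
local function resolve_symbol(name, callback)
  vim.lsp.buf_request(0, "workspace/symbol", { query = name },
    function(err, symbols)
      if err or not symbols or #symbols == 0 then
        return callback(nil)
      end
      -- Take the first match and read its source range.
      local loc = symbols[1].location
      local bufnr = vim.uri_to_bufnr(loc.uri)
      vim.fn.bufload(bufnr)
      local lines = vim.api.nvim_buf_get_lines(
        bufnr, loc.range.start.line, loc.range["end"].line + 1, false)
      callback(table.concat(lines, "\n"))
    end)
end
```

The resolved text would then replace the `@name` token in the prompt before it is sent to the LLM.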

Add guide for users to add own clients

There should be a written Markdown guide that helps users add their own clients, so that it is easy to extend the plugin with your own model.

It might be useful to add this to the wiki.

Add footer to output floating window with release 0.10

When you click on the output window it gains focus, and because its southern border does not have a title, the title is removed from the input window, see:

(Screenshots: expected vs. actual behaviour.)

Fix

Add a footer to the output window options; note that the footer option is only available from Neovim 0.10, see the PR.
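The fix could look like the sketch below, which uses the `footer` and `footer_pos` keys of `nvim_open_win` (available since Neovim 0.10). The positions, sizes, and titles are placeholder values.

```lua
-- Sketch: give the output float a footer (requires Neovim 0.10+), so
-- the input window's title is not visually lost when focus changes.
local opts = {
  relative = "editor", row = 2, col = 10, width = 80, height = 20,
  border = "rounded",
  title = "Output",
  footer = "Input",          -- new in Neovim 0.10
  footer_pos = "center",
}
local win = vim.api.nvim_open_win(bufnr, true, opts)
```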

Ability to fix diagnostics with an LLM

A user should be able to hit a keybind while on a line with an LSP error; the relevant information about that error should then be passed to an LLM, which should try to fix it.
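One way to gather the relevant information is `vim.diagnostic.get` scoped to the cursor line. In this sketch, `prompt_llm` is a hypothetical helper standing in for whatever request path the plugin uses.

```lua
-- Sketch of the assumed flow: collect diagnostics on the current line
-- and hand them to an LLM prompt.
local function fix_diagnostics_under_cursor()
  local row = vim.api.nvim_win_get_cursor(0)[1] - 1 -- 0-indexed
  local diags = vim.diagnostic.get(0, { lnum = row })
  if #diags == 0 then return end

  local line = vim.api.nvim_buf_get_lines(0, row, row + 1, false)[1]
  local messages = {}
  for _, d in ipairs(diags) do
    table.insert(messages, d.message)
  end

  local prompt = ("Fix this line:\n%s\nDiagnostics:\n%s")
    :format(line, table.concat(messages, "\n"))
  prompt_llm(prompt) -- hypothetical helper
end
```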

fuzzy pick saved chats

A user should be able to fuzzy-find over the directory of saved chats and pick a chat to load into the chat window.
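A minimal sketch, assuming chats are saved as Markdown files in one directory (the layout and the `load_chat` loader are hypothetical). `vim.ui.select` becomes a fuzzy picker when backed by e.g. telescope-ui-select.

```lua
-- Sketch: list saved chat files and load whichever the user picks.
local function pick_saved_chat(chat_dir)
  local files = vim.fn.globpath(chat_dir, "*.md", false, true)
  vim.ui.select(files, { prompt = "Resume chat:" }, function(choice)
    if choice then
      load_chat(choice) -- hypothetical loader
    end
  end)
end
```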

nvim_win_set_config() changes the position of the windows

In the function change_instance the window config is changed. I would like to use

vim.api.nvim_win_set_config(self.output.window_handle, self.output.window_opts) 
vim.api.nvim_win_set_config(self.input.window_handle, self.input.window_opts)   

However, this lowers the placement of the output window, so I have resorted to a double call to toggle as a temporary fix.
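One possible workaround, sketched under the assumption that the stored config has gone stale: rebuild the full float config, explicitly including `relative`, `row`, and `col`, before every `nvim_win_set_config` call, since a config without `relative` is interpreted differently.

```lua
-- Sketch of a possible workaround: pass a complete float config,
-- anchor included, instead of reusing a stored partial config.
local function reapply_config(win, opts)
  local full = vim.tbl_extend("force", opts, {
    relative = "editor",
    row = opts.row,
    col = opts.col,
  })
  vim.api.nvim_win_set_config(win, full)
end
```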

Redesign the API module

  • Add the ability to list all models.
  • Add the ability to keep track of loaded chats with their context, so a user can continue a chat.
  • Add the ability to list all loaded chats.

Ability to reference a file or buffer as prompt input

There should be an easy way for the user to reference a file in the current working directory.
It might also be useful to reference a buffer, as that lets the user pull in input from something like a terminal buffer.
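Both cases reduce to reading lines and joining them; a sketch (function names are illustrative):

```lua
-- Sketch: pull the contents of a buffer or a file into a prompt string.
local function buffer_as_prompt_input(bufnr)
  local lines = vim.api.nvim_buf_get_lines(bufnr, 0, -1, false)
  return table.concat(lines, "\n")
end

local function file_as_prompt_input(path)
  return table.concat(vim.fn.readfile(path), "\n")
end
```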

Build context when streaming instead of when the job finishes

To improve performance, the message/context from the LLM could be built on the fly while processing stdout. At the moment this is done by extracting the right information from all the lines after the job has finished.

Building it per stdout chunk would improve performance, as we would not need to extract the relevant strings twice.
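The proposed change could be sketched as follows; `extract_text` is a hypothetical stand-in for parsing one streamed line from the provider.

```lua
-- Sketch of the assumed change: append each stdout chunk to the
-- context as it arrives instead of re-parsing everything on exit.
local context_parts = {}

local function on_stdout(_, line)
  -- extract_text is hypothetical: pulls the content field out of one
  -- streamed line from the provider.
  local text = extract_text(line)
  if text then
    table.insert(context_parts, text)
  end
end

local function on_exit()
  -- The full message is already assembled; no second pass needed.
  local full_message = table.concat(context_parts)
end
```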

Text goes off viewable area in chatwindow

When a chat uses more lines than there are viewable lines in the window, the last lines are not shown.
The expected behaviour is to always show the last lines.
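A simple fix sketch: after appending output, move the window cursor to the final line so the view follows the chat.

```lua
-- Sketch: keep the chat window scrolled to the last line after
-- appending new output.
local function scroll_to_bottom(win, buf)
  local last = vim.api.nvim_buf_line_count(buf)
  vim.api.nvim_win_set_cursor(win, { last, 0 })
end
```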

Add ability to diff text inline

Add the ability to diff text inline: not across two buffers (one showing additions, the other deletions), but inline in a single buffer.

This lets a user easily view text changes, e.g. those produced by an LLM prompted with "fix the bug in the following text" or similar.
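One possible building block is `vim.diff`, which returns hunk indices that can then be highlighted in place. This is a sketch of the approach, not the plugin's diff module; pure deletions are only roughly marked here.

```lua
-- Sketch: compute hunks between old and new text with vim.diff, then
-- highlight the changed lines inline in a single buffer.
local ns = vim.api.nvim_create_namespace("aiui_inline_diff_sketch")

local function highlight_hunks(buf, old_text, new_text)
  -- Each hunk is {start_a, count_a, start_b, count_b} (1-indexed).
  local hunks = vim.diff(old_text, new_text, { result_type = "indices" })
  for _, h in ipairs(hunks) do
    local start_new, count_new = h[3], h[4]
    -- Highlight at least one line, even for pure deletions.
    for l = start_new, start_new + math.max(count_new, 1) - 1 do
      vim.api.nvim_buf_add_highlight(buf, ns, "DiffAdd", l - 1, 0, -1)
    end
  end
end
```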

Save confirmation if chat buffer has been modified

If you modify the output buffer of the chat window and then try to close Neovim, you are asked whether to save the modified buffer. This is undesired behaviour: the buffer should either be saved to a file automatically with each write (so no prompt appears), or closed silently without confirmation.
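Both options from the issue can be sketched; `chat_buf` and `save_chat` are hypothetical names for the chat buffer handle and the plugin's persistence helper.

```lua
-- Option 1 (sketch): mark the chat buffer as scratch so Neovim never
-- prompts to save it on exit.
vim.bo[chat_buf].buftype = "nofile"
vim.bo[chat_buf].bufhidden = "wipe"

-- Option 2 (sketch): silently persist on every change instead.
vim.api.nvim_create_autocmd({ "TextChanged", "TextChangedI" }, {
  buffer = chat_buf,
  callback = function() save_chat(chat_buf) end, -- hypothetical helper
})
```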
